1 Installation

See the zhuanlan post.

Downloading TensorFlow used to take about 40 minutes; it is much faster now.
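A quick way to verify the install, assuming a pip-based setup; the pinned version is the one that appears later in these notes:

```python
# pip install tensorflow==2.0.0-alpha0   # version used in these notes (assumption)
import tensorflow as tf
print(tf.__version__)  # should print '2.0.0-alpha0'
```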

2 Constants, variables, matrix multiplication, gradient descent

See Hull (2019).

2.1 Use tensorflow like numpy

(2, 3) <dtype: 'float32'>
tf.Tensor([2. 5.], shape=(2,), dtype=float32)
tf.Tensor([2. 5.], shape=(2,), dtype=float32)
tf.Tensor(
[[11. 12. 13.]
 [14. 15. 16.]], shape=(2, 3), dtype=float32)
tf.Tensor(
[[ 1.  4.  9.]
 [16. 25. 36.]], shape=(2, 3), dtype=float32)
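The outputs above can be reproduced with a short sketch like the following; the constant A = [[1, 2, 3], [4, 5, 6]] is my assumption, inferred from the printed values:

```python
import tensorflow as tf

A = tf.constant([[1., 2., 3.], [4., 5., 6.]])
print(A.shape, A.dtype)           # (2, 3) <dtype: 'float32'>
print(tf.reduce_mean(A, 1))       # row means: [2. 5.]
print(tf.reduce_mean(A, axis=1))  # same call with the keyword argument
print(A + 10)                     # element-wise addition broadcasts the scalar
print(A ** 2)                     # element-wise square
```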


2.3 Variables

array([1, 2, 3, 4], dtype=int32)

One important property: constants cannot be modified, but variables can.
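A minimal sketch of that distinction, assuming the int32 array printed above came from a variable:

```python
import tensorflow as tf

a = tf.Variable([1, 2, 3, 4])
print(a.numpy())          # array([1, 2, 3, 4], dtype=int32)
a.assign([2, 4, 6, 8])    # fine: variables are mutable

b = tf.constant([1, 2, 3, 4])
# b.assign([2, 4, 6, 8])  # AttributeError: constants have no assign method
```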

2.4 Element-wise multiplication

Element-wise multiplication in TensorFlow is performed using two tensors with identical shapes. This is because the operation multiplies elements in corresponding positions in the two tensors.

Element-wise operations require matching dimensions, because elements are paired up by position.

\[\left[ \begin{array}{ll}{1} & {2} \\ {2} & {1}\end{array}\right] \odot \left[ \begin{array}{ll}{3} & {1} \\ {2} & {5}\end{array}\right]=\left[ \begin{array}{ll}{3} & {2} \\ {4} & {5}\end{array}\right]\]

[1 2 3 4]
[[1 2 3]
 [1 6 4]]
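The element-wise product in the equation above can be written with tf.multiply (or the * operator); this sketch mirrors the 2x2 example:

```python
import tensorflow as tf

A = tf.constant([[1, 2], [2, 1]])
B = tf.constant([[3, 1], [2, 5]])
print(tf.multiply(A, B))  # [[3 2] [4 5]], same as A * B
```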

2.5 Matrix multiplication

[[ 1]
 [ 0]
 [-1]
 [-3]]

Understanding matrix multiplication will make things simpler when we start making predictions with linear models.

OLS regression is, at its core, just matrix multiplication.
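As a sketch of that idea: the predictions of a linear model are a single matmul of the feature matrix with the coefficient vector. The numbers here are hypothetical, not the ones behind the output above:

```python
import tensorflow as tf

X = tf.constant([[1., 2.], [2., 1.]])  # hypothetical feature matrix
b = tf.constant([[1.], [0.5]])         # hypothetical coefficient vector
y_hat = tf.matmul(X, b)                # OLS-style predictions: X @ b
print(y_hat)
```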

2.6 group by to agg

reduce_sum(A, i) sums over dimension i

\[W=\left[ \begin{array}{ccccc}{11} & {7} & {4} & {3} & {25} \\ {50} & {2} & {60} & {0} & {10}\end{array}\right]\]

[61  9 64  3 35]
[ 50 122]

Understanding how to sum over tensor dimensions will be helpful when preparing datasets and training models.

Aggregation operations make such computations convenient.
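The two outputs above come from summing W over each dimension; a sketch that reproduces them:

```python
import tensorflow as tf

W = tf.constant([[11, 7, 4, 3, 25],
                 [50, 2, 60, 0, 10]])
print(tf.reduce_sum(W, 0).numpy())  # column sums: [61  9 64  3 35]
print(tf.reduce_sum(W, 1).numpy())  # row sums:    [ 50 122]
```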

2.7 reshape

\[H = \left[ \begin{array}{ccc}{255} & {0} & {255} \\ {255} & {255} & {255} \\ {255} & {0} & {255}\end{array}\right]\]

a 9-pixel, grayscale image of the letter H

For example, this \(H\) needs to be turned into one-dimensional data.

array([[255],
       [  0],
       [255],
       [255],
       [255],
       [255],
       [255],
       [  0],
       [255]], dtype=int32)

reshape helps put inputs into the shape a model expects.
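A sketch of the flattening step, using the 3x3 image of H defined above:

```python
import tensorflow as tf

H = tf.constant([[255, 0, 255],
                 [255, 255, 255],
                 [255, 0, 255]])
flat = tf.reshape(H, (9, 1))  # flatten the 3x3 image into a 9x1 column
print(flat.numpy())
```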

2.8 Gradient descent

Here derivatives can be computed automatically.

array([0.], dtype=float32)
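A minimal sketch with tf.GradientTape, assuming the function being differentiated is y = x² evaluated at x = 0, which matches the [0.] gradient above:

```python
import tensorflow as tf

x = tf.Variable([0.0])
with tf.GradientTape() as tape:
    y = tf.multiply(x, x)   # y = x^2
grad = tape.gradient(y, x)  # dy/dx = 2x, which is 0 at x = 0
print(grad.numpy())         # [0.]
```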

TensorFlow can be installed on Windows 7, but Python crashes when you run import tensorflow, so I recommend setting up a virtual machine running Linux; see the Linux study notes.

2.9 Differences between constant and variable

Generally you do not create tf.Variable objects by hand. tf.Variable is similar to tf.constant, except that the latter cannot be modified while the former can, which is convenient when updating weights during backpropagation.

<tf.Variable 'UnreadVariable' shape=(2, 3) dtype=int32, numpy=
array([[-3, -2, -1],
       [ 0,  1,  2]], dtype=int32)>
<tf.Variable 'Variable:0' shape=(2, 3) dtype=int32, numpy=
array([[-3, -2, -1],
       [ 0,  1,  2]], dtype=int32)>
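The first output is the handle returned by an assignment op, the second the variable itself; a sketch that reproduces both, assuming the variable started at [[1, 2, 3], [4, 5, 6]]:

```python
import tensorflow as tf

x = tf.Variable([[1, 2, 3], [4, 5, 6]])
print(x.assign_sub([[4, 4, 4], [4, 4, 4]]))  # returns an 'UnreadVariable' handle
print(x)                                     # now holds [[-3, -2, -1], [0, 1, 2]]
```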

3 Linear regression

See Hull (2019).

3.1 Defining variable types

The style resembles SQL type casting.

'2.0.0-alpha0'
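A sketch of the data-loading step assumed throughout section 3: casting pandas columns to explicit dtypes. The file name kc_house_data.csv and the column names are my assumptions:

```python
import numpy as np
import pandas as pd
import tensorflow as tf

print(tf.__version__)                                # '2.0.0-alpha0' in these notes
housing = pd.read_csv('kc_house_data.csv')           # hypothetical path
price = np.array(housing['price'], np.float32)       # declare the dtype explicitly
size = np.array(housing['sqft_living'], np.float32)  # hypothetical regressor
```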

3.2 Multiplication

count    2.161300e+04
mean     5.400881e+05
std      3.671272e+05
min      7.500000e+04
25%      3.219500e+05
50%      4.500000e+05
75%      6.450000e+05
max      7.700000e+06
Name: price, dtype: float64
<tf.Tensor: id=2, shape=(21613,), dtype=float32, numpy=
array([22190. , 53800. , 18000. , ..., 40210.1, 40000. , 32500. ],
      dtype=float32)>

Why does broadcasting work here once the data is defined as a pd.Series?

If it is not a pd.Series, the scalar has to be set up as a tf.Variable.

<tf.Tensor: id=48, shape=(3,), dtype=float32, numpy=array([0.1, 0.2, 0.3], dtype=float32)>
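A sketch that reproduces both outputs, assuming the first few prices from the data described above:

```python
import tensorflow as tf

# the scalar 0.1 broadcasts across each tensor
print(tf.multiply(0.1, tf.constant([1., 2., 3.])))            # [0.1 0.2 0.3]
price = tf.constant([221900., 538000., 180000.], tf.float32)  # first few prices
print(tf.multiply(0.1, price))                                # [22190. 53800. 18000.]
```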

3.3 loss function

-98150

This defines the loss function; once we choose an optimizer, we can iterate.
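A minimal sketch of such a loss, assuming mean squared error between targets and predictions:

```python
import tensorflow as tf

def loss_function(targets, predictions):
    # mean squared error; tf.keras.losses.mae would give mean absolute error
    return tf.keras.losses.mse(targets, predictions)
```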

3.5 Implementing linear regression

One of the benefits of using tensorflow is that you have the option to customize models down to the linear-algebraic level, as we’ve shown in the last two exercises (Hull 2019).

tensorflow helps you understand neural networks at the level of linear algebra.

The 0.1 values are the initial values of the parameters.

<tf.Variable 'UnreadVariable' shape=() dtype=int64, numpy=1>
0.101 0.101

This is the result after one iteration.

How should we understand this for loop? The optimizer minimizes the loss defined by the loss function, and the loop then just prints the loss at each iteration, ending with the final beta0 and beta1. A sketch of the loop appears after the output below.

tf.Tensor(426196570000.0, shape=(), dtype=float32)
tf.Tensor(426193780000.0, shape=(), dtype=float32)
tf.Tensor(426191160000.0, shape=(), dtype=float32)
... (995 intermediate iterations omitted; the loss decreases steadily) ...
tf.Tensor(423489240000.0, shape=(), dtype=float32)
tf.Tensor(423486550000.0, shape=(), dtype=float32)
1.0991763 1.0991884
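A sketch of the whole loop under the assumptions above (SGD optimizer, MSE loss, the housing columns loaded in 3.1); the names, file path, and learning rate are assumptions, not the exact code behind this output:

```python
import numpy as np
import pandas as pd
import tensorflow as tf

housing = pd.read_csv('kc_house_data.csv')           # hypothetical path
price = np.array(housing['price'], np.float32)
size = np.array(housing['sqft_living'], np.float32)  # hypothetical regressor

intercept = tf.Variable(0.1, dtype=tf.float32)       # 0.1 is the initial value
slope = tf.Variable(0.1, dtype=tf.float32)

def loss_function(intercept, slope):
    predictions = intercept + slope * size
    return tf.keras.losses.mse(price, predictions)

# small learning rate; with unscaled features this may need tuning
opt = tf.keras.optimizers.SGD(learning_rate=0.001)
for j in range(1000):
    # each step nudges intercept and slope downhill on the loss surface
    opt.minimize(lambda: loss_function(intercept, slope),
                 var_list=[intercept, slope])
    print(loss_function(intercept, slope))

print(intercept.numpy(), slope.numpy())              # e.g. 1.0991763 1.0991884
```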

3.6 Multiple regression

tf.Tensor(424295700000.0, shape=(), dtype=float32)
tf.Tensor(424276920000.0, shape=(), dtype=float32)
tf.Tensor(424257220000.0, shape=(), dtype=float32)
... (intermediate iterations omitted; 1,000 in total) ...
tf.Tensor(405701620000.0, shape=(), dtype=float32)
tf.Tensor(405685600000.0, shape=(), dtype=float32)
1.0943303 1.0943485 1.0531795

4 Understanding neural networks through linear algebra

Reference: Hull (2019)

  1. low-level: matmul
  2. high-level: keras.layers.Dense

4.2 Define a Model

Available activation functions:

  1. sigmoid
  2. relu
  3. softmax

Available gradient-descent optimizers (see the sketch after this list):

  1. tf.keras.optimizers.SGD() - learning_rate
  2. tf.keras.optimizers.RMSprop() - learning_rate
  3. tf.keras.optimizers.Adam() (adaptive moment estimation) - learning_rate, beta_1, beta_2
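
A minimal sketch combining these options; the layer sizes and the loss are assumptions for illustration. Note that the actual keyword names in Adam() are beta_1 and beta_2:

import tensorflow as tf

# Hidden layers use relu/sigmoid; the output layer uses softmax over classes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(8, activation='sigmoid'),
    tf.keras.layers.Dense(4, activation='softmax'),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01,
                                       beta_1=0.9, beta_2=0.999),
    loss='categorical_crossentropy',
)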

4.3 The simplest model, implemented with linear algebra

\[\text{products}=\left[ \begin{array}{ccc}{3} & {2} & {23} \\ {2} & {1} & {24} \\ {1} & {1} & {49} \\ {1} & {1} & {49} \\ {2} & {1} & {29}\end{array}\right] \left[ \begin{array}{c}{-1} \\ {-2} \\ {0.05}\end{array}\right]\]

This is a made-up matrix product, with the weights assumed to be already trained.

[[-5.85      ]
 [-2.8       ]
 [-0.54999995]
 [-0.54999995]
 [-2.55      ]]
[[0.00287163]
 [0.05732417]
 [0.36586443]
 [0.36586443]
 [0.07242649]]
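
The two blocks above are consistent with computing the layer by hand: a matrix product followed by a sigmoid activation. A sketch that reproduces them (the name borrower_features is an assumption):

import tensorflow as tf

borrower_features = tf.constant([[3., 2., 23.],
                                 [2., 1., 24.],
                                 [1., 1., 49.],
                                 [1., 1., 49.],
                                 [2., 1., 29.]])
weights = tf.constant([[-1.0], [-2.0], [0.05]])

# e.g. first row: 3*(-1) + 2*(-2) + 23*0.05 = -5.85
products = tf.matmul(borrower_features, weights)
print(products.numpy())                                 # first block above
print(tf.keras.activations.sigmoid(products).numpy())   # second block above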

4.4 Predictions at each layer

https://github.com/users/JiaxiangBU/projects/1#card-21795678

array([[ 3.,  2., 23.],
       [ 2.,  1., 24.],
       [ 1.,  1., 49.],
       [ 1.,  1., 49.],
       [ 2.,  1., 29.]], dtype=float32)

This corresponds to a dataset with 5 samples and 3 features.

array([[-1.  ],
       [-2.  ],
       [ 0.05]], dtype=float32)

Three features, hence three weights.

[[9.9999982e-01 1.5407801e-05]
 [1.0000000e+00 4.7385693e-06]
 [1.0000000e+00 0.0000000e+00]
 [1.0000000e+00 0.0000000e+00]
 [1.0000000e+00 3.2116634e-07]]
[[0.31109148 0.63663745]
 [0.31109256 0.63663757]
 [0.311093   0.6366377 ]
 [0.311093   0.6366377 ]
 [0.31109297 0.6366377 ]]
[[0.68249834]
 [0.6824985 ]
 [0.6824986 ]
 [0.6824986 ]
 [0.6824986 ]]

Here we can inspect each layer's predictions under the initial weights.

Once the loss function is introduced, the weights are updated at every iteration.
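
A sketch of printing the layer-by-layer predictions; the layer widths (2, 2, 1) are inferred from the printed shapes, and the sigmoid activations are an assumption:

import tensorflow as tf

# 5 samples x 3 features, as in the first array of this section.
borrower_features = tf.constant([[3., 2., 23.],
                                 [2., 1., 24.],
                                 [1., 1., 49.],
                                 [1., 1., 49.],
                                 [2., 1., 29.]])

# Each call creates a layer with fresh initial weights and applies it eagerly.
dense1 = tf.keras.layers.Dense(2, activation='sigmoid')(borrower_features)
dense2 = tf.keras.layers.Dense(2, activation='sigmoid')(dense1)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(dense2)

print(dense1.numpy())
print(dense2.numpy())
print(outputs.numpy())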

5 Training the model

The compilation step in keras allows us to set the optimizer, loss function, and other useful training parameters in a single line of code.

In other words, compilation is just setting up the optimization.
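
A sketch consistent with the summary below: the parameter counts imply a 784-dimensional input (784 × 32 + 32 = 25,120), and the trailing None is simply the return value of print(model.summary()). The activations and loss here are assumptions:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(4, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
print(model.summary())  # summary() prints the table and returns None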

Model: "sequential_8"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_33 (Dense)             (None, 32)                25120     
_________________________________________________________________
dense_34 (Dense)             (None, 16)                528       
_________________________________________________________________
dense_35 (Dense)             (None, 8)                 136       
_________________________________________________________________
dense_36 (Dense)             (None, 4)                 36        
=================================================================
Total params: 25,820
Trainable params: 25,820
Non-trainable params: 0
_________________________________________________________________
None
Int64Index([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9,
            ...
            775, 776, 777, 778, 779, 780, 781, 782, 783, 784],
           dtype='int64', length=785)

     0    1    2    3    4    5    6    7    8    9  ...  775  776  777  778  779  780  781  782  783  784
1    0  141  142  144  145  147  149  150  151  152  ...  173  179  179  180  181  181  182  182  183  183
2    1  156  157  160  162  164  166  169  171  171  ...  181  197  195  193  193  191  192  198  193  182
3    3   63   26   65   86   97  106  117  123  128  ...  175  179  180  182  183  183  184  185  185  185
4    1  156  160  164  168  172  175  178  180  182  ...  108  107  106  110  111  108  108  102   84   70

[4 rows × 785 columns]

Following Kul (2019), use pd.get_dummies to one-hot encode the y variable.
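
A sketch of that encoding step; the toy DataFrame and the label column name are assumptions:

import pandas as pd

# Toy labels standing in for the 4 classes in the notes' dataset.
df = pd.DataFrame({'label': [0, 1, 3, 1]})
labels = pd.get_dummies(df['label']).values.astype('float32')
print(labels.shape)  # (4, 3) here; (2000, 4) for the notes' dataset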

<class 'numpy.ndarray'> <class 'numpy.ndarray'> (2000, 784) (2000, 4)
<class 'numpy.ndarray'> <class 'numpy.ndarray'> (1600, 784) (1600, 4)
<class 'numpy.ndarray'> <class 'numpy.ndarray'> (400, 784) (400, 4)
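
The log below is the kind of output produced by a fit call along these lines (a sketch; x_train and y_train are the (1600, 784) and (1600, 4) arrays above, and validation_split=0.1 is inferred from "Train on 1440 samples, validate on 160 samples"):

# Continuing the sketch from section 5.
model.fit(x_train, y_train, epochs=100, validation_split=0.1)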

Train on 1440 samples, validate on 160 samples
Epoch 1/100 1440/1440 [==============================] - 0s 62us/sample - loss: 0.0118 - accuracy: 1.0000 - val_loss: 0.0127 - val_accuracy: 1.0000
Epoch 2/100 1440/1440 [==============================] - 0s 48us/sample - loss: 0.0115 - accuracy: 1.0000 - val_loss: 0.0124 - val_accuracy: 1.0000
... (epochs 3-94 omitted; loss falls steadily while accuracy stays at 1.0000) ...
Epoch 95/100 1440/1440 [==============================] - 0s 43us/sample - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.0015 - val_accuracy: 1.0000
Epoch 96/100 1440/1440 [==============================] - 0s 45us/sample - loss: 0.0013 - accuracy: 1.0000 - val_loss: 0.0015 - val_accuracy: 1.0000 Epoch 97/100 1440/1440 [==============================] - 0s 51us/sample - loss: 0.0013 - accuracy: 1.0000 - val_loss: 0.0014 - val_accuracy: 1.0000 Epoch 98/100 1440/1440 [==============================] - 0s 53us/sample - loss: 0.0013 - accuracy: 1.0000 - val_loss: 0.0014 - val_accuracy: 1.0000 Epoch 99/100 1440/1440 [==============================] - 0s 51us/sample - loss: 0.0013 - accuracy: 1.0000 - val_loss: 0.0014 - val_accuracy: 1.0000 Epoch 100/100 1440/1440 [==============================] - 0s 50us/sample - loss: 0.0012 - accuracy: 1.0000 - val_loss: 0.0013 - val_accuracy: 1.0000

<tensorflow.python.keras.callbacks.History at 0x1a370aea58>

Using validation_split directly like this is simpler and more convenient than carving out a validation set by hand.
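A self-contained sketch, with toy data and model as assumptions, sized so that validation_split=0.1 reproduces the 1440/160 partition in the log above:

```python
# Minimal sketch of validation_split; the toy data, shapes and model
# below are assumptions, not the original code.
import numpy as np
import tensorflow as tf

X = np.random.rand(1600, 3).astype("float32")
y = (X.sum(axis=1) > 1.5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Keras holds out the last 10% of the rows (160 samples) as the validation set.
model.fit(X, y, epochs=100, validation_split=0.1)
```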

You may have noticed that your model performed quite well. In just 10 epochs, we achieved a classification accuracy of around 98% in the validation sample!

And it really does.

400/400 [==============================] - 0s 34us/sample - loss: 0.0012 - accuracy: 1.0000

[0.0011529619945213198, 1.0]

6 Saving and loading models

See Géron (2019)

A quick test of the whole workflow:

compile(), fit(), evaluate() and predict()
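A minimal save-and-restore sketch (the model and file name here are assumptions):

```python
# Sketch of saving and reloading a tf.keras model; "demo_model.h5" is
# an illustrative file name, not from the original notebook.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mae")

model.save("demo_model.h5")  # saves architecture, weights and optimizer state
restored = tf.keras.models.load_model("demo_model.h5")
restored.summary()  # the restored model exposes the same compile()/fit()/evaluate()/predict() API
```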

Model: "sequential_8"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_33 (Dense)             (None, 32)                25120
_________________________________________________________________
dense_34 (Dense)             (None, 16)                528
_________________________________________________________________
dense_35 (Dense)             (None, 8)                 136
_________________________________________________________________
dense_36 (Dense)             (None, 4)                 36
=================================================================
Total params: 25,820
Trainable params: 25,820
Non-trainable params: 0
_________________________________________________________________

2000/2000 [==============================] - 0s 51us/sample - loss: 0.0012 - accuracy: 1.0000

[0.0012143446169793606, 1.0]

array([[9.9889600e-01, 1.3288877e-06, 1.1027037e-03, 1.0441600e-11],
       [1.0800778e-03, 9.9525696e-01, 8.5714436e-04, 2.8056798e-03],
       [6.5598740e-09, 3.6293358e-07, 2.9892378e-04, 9.9970078e-01],
       [1.0800778e-03, 9.9525696e-01, 8.5714436e-04, 2.8056798e-03]], dtype=float32)

array([[9.9999487e-01, 4.3370420e-07, 4.6273562e-06, 2.3267423e-16],
       [1.0800778e-03, 9.9525696e-01, 8.5714436e-04, 2.8056798e-03],
       [1.6675259e-07, 9.9770224e-23, 9.9999964e-01, 2.0717891e-07],
       [3.3840766e-10, 1.1746530e-08, 9.4815652e-05, 9.9990511e-01],
       [2.5084819e-07, 2.2120622e-24, 9.9999976e-01, 7.6603044e-09]], dtype=float32)

7 Using the Keras API

2.0.0-alpha0 2.2.4-tf

Here the model is written in the Keras functional-API style rather than with the Sequential API.
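A minimal functional-API sketch that reproduces the two-parameter model summarized below, assuming a single numeric input such as seed_diff:

```python
# Sketch of a one-input, one-output functional model; the input feature
# is an assumption.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_tensor = Input(shape=(1,))
output_tensor = Dense(1)(input_tensor)  # 2 params: one weight + one bias
model = Model(input_tensor, output_tensor)
model.compile(optimizer="adam", loss="mean_absolute_error")
model.summary()
```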

Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_3 (InputLayer)         [(None, 1)]               0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________

array(['season', 'team_1', 'team_2', 'home', 'seed_diff', 'score_diff',
       'score_1', 'score_2', 'won'], dtype=object)

Train on 2552 samples, validate on 284 samples
Epoch 1/100   2552/2552 [==============================] - 0s 12us/sample - loss: 9.2874 - val_loss: 8.9423
Epoch 12/100  2552/2552 [==============================] - 0s 14us/sample - loss: 9.2056 - val_loss: 8.9087
... (remaining epochs omitted; val_loss bottoms out around epoch 12 and then drifts up) ...
Epoch 100/100 2552/2552 [==============================] - 0s 7us/sample - loss: 9.1638 - val_loss: 8.9463

1398/1398 [==============================] - 0s 15us/sample - loss: 9.2469

9.246888297821831


8 Predicting multiple y variables

See Deane-Mayer (2019) and Deane-Mayer (2018)

8.1 Parallel outputs

Here the model produces two y variables, hence keras.layers.Dense(2)(input_tensor); the two ys sit in a parallel structure.

A parallel structure requires both targets to share the same loss function; otherwise there is no single objective for gradient descent to minimize.
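A minimal sketch of such a parallel two-output model, assuming two input features (say, seed_diff and home) so that the layer shapes match the weight arrays printed below:

```python
# Sketch of a parallel two-output model; input features are assumptions.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_tensor = Input(shape=(2,))
output_tensor = Dense(2)(input_tensor)  # one layer, two targets, one shared loss
model = Model(input_tensor, output_tensor)
model.compile(optimizer="adam", loss="mean_absolute_error")
```

With a single Dense(2) head, Keras applies the one loss to both columns of y, which is exactly why the parallel structure forces a shared loss.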

['season' 'team_1' 'team_2' 'home' 'seed_diff' 'score_diff' 'score_1' 'score_2' 'won' 'pred'] (4234, 10)

The input layer will have 4 weights: 2 for each input times 2 for each output.

The output layer will have 2 weights, one for each output.

[array([[ 1.4169061,  1.1647756],
        [-0.7371949, -0.0713665]], dtype=float32),
 array([0.20000015, 0.20000015], dtype=float32)]

In "Fit a model with two outputs | Python" my run did not reach results as good as the example's, so for now I raise the number of epochs.


8.2 Chained outputs

Right now I assign fan users a predicted score for whether they will be weekly active. But within the weekly-active group we still need to distinguish whether a user is highly active.

Last time the suggestion was simply to split users into high/medium/low activity, but by business logic we should first predict whether a user is weekly active, and then predict whether an active user is highly active. So the two predictions are chained. I recently saw a TensorFlow example that models exactly this chained structure and outputs predictions for both y's; see the example below.

Now you will create a different kind of 2-output model. This time, you will predict the score difference, instead of both team’s scores and then you will predict the probability that team 1 won the game. This is a pretty cool model: it is going to do both classification and regression! (Deane-Mayer 2019)

This is a chained structure: first predict \(y^{(1)}\), then use \(\hat y^{(1)}\) to predict \(y^{(2)}\).

It shows how much freedom neural-network modeling offers; it is far less constrained by the assumptions behind most machine-learning methods.

For example: first predict whether a user is active at all, then predict whether that active user is highly or lowly active.

  1. output_tensor_1 predicts \(y^{(1)}\)
  2. output_tensor_2 predicts \(y^{(2)}\) from \(\hat y^{(1)}\) (see the sketch below)
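A minimal sketch of the chained regressor-plus-classifier, following the course example; the two input features (e.g. seed_diff and pred) and the no-bias layers are assumptions, chosen to be consistent with the weight arrays printed further below:

```python
# Sketch of a chained two-output model: a regression head whose output
# feeds a classification head. Feature choice is an assumption.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

input_tensor = Input(shape=(2,))
# First head: regression on the score difference.
output_tensor_1 = Dense(1, activation="linear", use_bias=False)(input_tensor)
# Second head: win probability, fed by the predicted score difference.
output_tensor_2 = Dense(1, activation="sigmoid", use_bias=False)(output_tensor_1)

model = Model(input_tensor, [output_tensor_1, output_tensor_2])
model.compile(loss=["mean_absolute_error", "binary_crossentropy"],
              optimizer=Adam(learning_rate=0.01))
```

The classifier never sees the raw features, only the predicted score difference, which is what makes the structure chained rather than parallel.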

The regression problem is easier than the classification problem because MAE punishes the model less for a loss due to random chance. For example, if score_diff is -1 and won is 0, that means team_1 had some bad luck and lost by a single free throw. The data for the easy problem helps the model find a solution to the hard problem.

Here basketball is the example: given the home and away teams, first predict the score difference, then predict the win/loss outcome. A small score difference on the losing side, for instance, is a narrow defeat.

Using the pred column as an input also makes sense: it is a measure of head-to-head competitiveness.

['season' 'team_1' 'team_2' 'home' 'seed_diff' 'score_diff' 'score_1' 'score_2' 'won' 'pred'] (4234, 10)

Train on 3087 samples, validate on 343 samples
Epoch 1/10  3087/3087 [==============================] - 0s 100us/sample - loss: 16.3097 - dense_loss: 14.4875 - dense_1_loss: 1.8222 - val_loss: 15.1597 - val_dense_loss: 13.3774 - val_dense_1_loss: 1.7823
... (epochs 2-9 omitted; all losses fall steadily) ...
Epoch 10/10 3087/3087 [==============================] - 0s 2us/sample - loss: 15.0588 - dense_loss: 13.4132 - dense_1_loss: 1.6456 - val_loss: 13.9223 - val_dense_loss: 12.3361 - val_dense_1_loss: 1.5862

<tensorflow.python.keras.callbacks.History at 0x1a34811be0>

[array([[-1.2418715],
        [ 0.5748157]], dtype=float32),
 array([[-1.326097]], dtype=float32)]

804/804 [==============================] - 0s 32us/sample - loss: 14.9233 - dense_loss: 13.2662 - dense_1_loss: 1.6570

[14.92325133233521, 13.266212, 1.6570392]

9 ID Embedding

9.1 Overview

The way I usually use embeddings is to reduce the dimensionality of categorical variables such as user_id and item_id. Once a variable has more than about 50 levels, one-hot encoding makes the model very inefficient. (Integer encoding is also possible, but it is a contested approach.)

Below is an application scenario with reproducible code.

Suppose a first-division league has many teams; some pairs have met and some have not. Given their score differences and win/loss records, we want to aggregate a single strength index per team and use it to predict future head-to-head outcomes.

2.0.0 2.2.4-tf

array(['season', 'team_1', 'team_2', 'home', 'score_diff', 'score_1',
       'score_2', 'won'], dtype=object)

10888

Embedding is used much like Dense. team_lookup is a reusable layer.
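A minimal sketch of the lookup layer, assuming n_teams = 10888 (the parameter count printed above), with names following the course example:

```python
# Sketch of a reusable team-strength embedding: each team ID maps to a
# single learned scalar. n_teams is taken from the output above.
from tensorflow.keras.layers import Embedding, Input, Flatten
from tensorflow.keras.models import Model

n_teams = 10888
team_lookup = Embedding(input_dim=n_teams, output_dim=1,
                        input_length=1, name="Team-Strength")

teamid_in = Input(shape=(1,))
strength = Flatten()(team_lookup(teamid_in))  # one scalar strength per team
team_strength_model = Model(teamid_in, strength, name="Team-Strength-Model")
```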

Now that you’ve looked up how “strong” each team is, subtract the team strengths to determine which team is expected to win the game. But rather than using seed differences to predict score differences, you’ll use the difference of your own team strength model to predict score differences.
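Continuing the sketch above, the shared team_strength_model scores both teams and a Subtract layer turns the strength gap into a predicted score difference, matching the summary below:

```python
# Sketch of the two-input model: the same embedding model (shared weights)
# is applied to both team IDs; layer names are illustrative.
from tensorflow.keras.layers import Input, Subtract
from tensorflow.keras.models import Model

team_in_1 = Input(shape=(1,), name="Team-1-In")
team_in_2 = Input(shape=(1,), name="Team-2-In")

team_1_strength = team_strength_model(team_in_1)  # shared lookup
team_2_strength = team_strength_model(team_in_2)

score_diff = Subtract()([team_1_strength, team_2_strength])
model = Model([team_in_1, team_in_2], score_diff)
model.compile(optimizer="adam", loss="mean_absolute_error")
```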

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_2 (InputLayer)            [(None, 1)]          0
__________________________________________________________________________________________________
input_3 (InputLayer)            [(None, 1)]          0
__________________________________________________________________________________________________
Team-Strength-Model (Model)     (None, 1)            10888       input_2[0][0]
                                                                 input_3[0][0]
__________________________________________________________________________________________________
subtract (Subtract)             (None, 1)            0           Team-Strength-Model[1][0]
                                                                 Team-Strength-Model[2][0]
==================================================================================================
Total params: 10,888
Trainable params: 10,888
Non-trainable params: 0
__________________________________________________________________________________________________


WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'pandas.core.series.Series'>"}), <class 'NoneType'>
Train on 280960 samples, validate on 31218 samples
Epoch 1/100   280960/280960 [==============================] - 0s 1us/sample - loss: 11.8875 - val_loss: 11.6657
... (epochs 2-99 omitted; both losses fall steadily) ...
Epoch 100/100 280960/280960 [==============================] - 0s 1us/sample - loss: 10.2139 - val_loss: 10.2734

This is a fairly common problem: when the input is an integer-coded categorical variable whose codes are not contiguous, an ID can jump out of range. With 150 categories, say, an id of 3000 exceeds the Embedding's input range.

It is also best to index from 0: [0, 2150) means 2150 categories running from 0 up to 2150 - 1, the right parenthesis marking a half-open interval.
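A minimal sketch of remapping raw IDs onto such a contiguous, 0-based index (the games frame and its values are illustrative; the NCAA data below carries the analogous index_t1/index_t2 columns):

```python
# Sketch: remap arbitrary integer IDs onto the contiguous range [0, n_teams)
# so no ID exceeds the Embedding's input_dim. `games` is toy data.
import pandas as pd

games = pd.DataFrame({"team_1": [1421, 3000, 1411],
                      "team_2": [1411, 1421, 3000]})
unique_ids = sorted(pd.unique(games[["team_1", "team_2"]].values.ravel()))
id_to_index = {tid: i for i, tid in enumerate(unique_ids)}  # 0-based, contiguous

games["index_t1"] = games["team_1"].map(id_to_index)
games["index_t2"] = games["team_2"].map(id_to_index)
n_teams = len(unique_ids)  # now safe to use as Embedding(input_dim=n_teams, ...)
```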

(figure: training and validation loss by epoch)

Around 60 epochs looks optimal.

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'pandas.core.series.Series'>"}), <class 'NoneType'>
Train on 280960 samples, validate on 31218 samples
Epoch 1/60  280960/280960 [==============================] - 1s 2us/sample - loss: 10.2025 - val_loss: 10.2629
... (epochs 2-59 omitted) ...
Epoch 60/60 280960/280960 [==============================] - 0s 2us/sample - loss: 9.6514 - val_loss: 9.7391

array(['season', 'team_1', 'team_2', 'home', 'seed_diff', 'score_diff',
       'score_1', 'score_2', 'won'], dtype=object)

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'pandas.core.series.Series'>"}), <class 'NoneType'>
4234/4234 [==============================] - 0s 27us/sample - loss: 10.3765

10.376511199439422

Below is a demo on NCAA tournament data; see https://www.kaggle.com/c/google-cloud-ncaa-march-madness-2020-division-1-mens-tournament/overview


Tensorflow version: 2.0.0 Keras version: 2.2.4-tf

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4716704 entries, 0 to 4716703
Data columns (total 7 columns):
Season        int64
T1_TeamID     int64
T2_TeamID     int64
Score_Diff    int64
win           int64
index_t1      int64
index_t2      int64
dtypes: int64(7)
memory usage: 251.9 MB

Season T1_TeamID T2_TeamID Score_Diff win index_t1 index_t2
0 2003 1421 1411 8 1 214 206
1 2003 1421 1411 8 1 214 206
2 2003 1421 1411 8 1 214 206
3 2003 1421 1411 8 1 214 206
4 2003 1421 1411 8 1 214 206

min    2003
max    2019
Name: Season, dtype: int64

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_2 (InputLayer)            [(None, 1)]          0
__________________________________________________________________________________________________
input_3 (InputLayer)            [(None, 1)]          0
__________________________________________________________________________________________________
Team-Strength-Model (Model)     (None, 1)            249         input_2[0][0]
                                                                 input_3[0][0]
__________________________________________________________________________________________________
subtract (Subtract)             (None, 1)            0           Team-Strength-Model[1][0]
                                                                 Team-Strength-Model[2][0]
==================================================================================================
Total params: 249
Trainable params: 249
Non-trainable params: 0
__________________________________________________________________________________________________


(4716704, 7)

0 ../model/fold_id_0.pkl saved.
1 ../model/fold_id_1.pkl saved.
2 ../model/fold_id_2.pkl saved.
3 ../model/fold_id_3.pkl saved.
4 ../model/fold_id_4.pkl saved.
5 ../model/fold_id_5.pkl saved.
6 ../model/fold_id_6.pkl saved.
7 ../model/fold_id_7.pkl saved.
8 ../model/fold_id_8.pkl saved.
9 ../model/fold_id_9.pkl saved.
10 ../model/fold_id_10.pkl saved.
11 ../model/fold_id_11.pkl saved.
12 ../model/fold_id_12.pkl saved.
13 ../model/fold_id_13.pkl saved.
14 ../model/fold_id_14.pkl saved.
15 ../model/fold_id_15.pkl saved.
16 ../model/fold_id_16.pkl saved.
17 ../model/fold_id_17.pkl saved.
18 ../model/fold_id_18.pkl saved.
19 ../model/fold_id_19.pkl saved.
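A hypothetical reconstruction of how those fold files could have been produced; the 20-way split is inferred from the file names and the CV shapes printed below, while the seed and the pickled payload are assumptions:

```python
# Hypothetical sketch: pickle the train/validation indices of 20 folds.
# n_rows matches the shape printed above; random_state is an assumption.
import pickle
import numpy as np
from sklearn.model_selection import KFold

n_rows = 4716704
kf = KFold(n_splits=20, shuffle=True, random_state=42)
for i, (train_idx, valid_idx) in enumerate(kf.split(np.arange(n_rows))):
    path = "../model/fold_id_{}.pkl".format(i)
    with open(path, "wb") as f:
        pickle.dump((train_idx, valid_idx), f)
    print(i, path, "saved.")
```

With 20 folds, each CV training set holds about 19/20 of the 4,716,704 rows (4,480,868 or 4,480,869), which matches the "CV train shape" lines below.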

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'pandas.core.series.Series'>"}), <class 'NoneType'>
Train on 4032781 samples, validate on 448087 samples
Epoch 1/10  4032781/4032781 [==============================] - 6s 1us/sample - loss: 8.2729 - val_loss: 9.5765
... (epochs 2-9 omitted) ...
Epoch 10/10 4032781/4032781 [==============================] - 5s 1us/sample - loss: 8.2709 - val_loss: 9.5550
(the same data-adapter warning is printed again after fitting)


Fold: 0 CV train shape: (4480868,) and (4480868,)
Train on 4032781 samples, validate on 448087 samples
Epoch 1/10  4032781/4032781 [==============================] - 5s 1us/sample - loss: 8.2682 - val_loss: 9.5451
... (epochs 2-9 omitted) ...
Epoch 10/10 4032781/4032781 [==============================] - 5s 1us/sample - loss: 8.2674 - val_loss: 9.5266
(the data-adapter warning, identical to the one above, repeats around each fold's fit and is omitted from here on)


Fold: 1 CV train shape: (4480868,) and (4480868,)
Train on 4032781 samples, validate on 448087 samples
Epoch 1/10  4032781/4032781 [==============================] - 5s 1us/sample - loss: 8.2674 - val_loss: 9.5286
... (epochs 2-9 omitted) ...
Epoch 10/10 4032781/4032781 [==============================] - 5s 1us/sample - loss: 8.2669 - val_loss: 9.5217


Fold: 2 CV train shape: (4480868,) and (4480868,)
Train on 4032781 samples, validate on 448087 samples
Epoch 1/10  4032781/4032781 [==============================] - 6s 1us/sample - loss: 8.2676 - val_loss: 9.5230
... (epochs 2-9 omitted) ...
Epoch 10/10 4032781/4032781 [==============================] - 5s 1us/sample - loss: 8.2671 - val_loss: 9.5201


Fold: 3 CV train shape: (4480868,) and (4480868,)
Train on 4032782 samples, validate on 448087 samples
Epoch 1/10  4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2674 - val_loss: 9.5166
... (epochs 2-9 omitted) ...
Epoch 10/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2672 - val_loss: 9.5138

png

png

Fold: 4 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2661 - val_loss: 9.5129 Epoch 2/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2661 - val_loss: 9.5122 Epoch 3/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2660 - val_loss: 9.5119 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5121 Epoch 5/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5115 Epoch 6/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5117 Epoch 7/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2659 - val_loss: 9.5113 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2659 - val_loss: 9.5112 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2659 - val_loss: 9.5107 Epoch 10/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2658 - val_loss: 9.5106 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 5 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2645 - val_loss: 9.5089 Epoch 2/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2645 - val_loss: 9.5093 Epoch 3/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2645 - val_loss: 9.5081 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2645 - val_loss: 9.5091 Epoch 5/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2644 - val_loss: 9.5082 Epoch 6/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2644 - val_loss: 9.5096 Epoch 7/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2644 - val_loss: 9.5090 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2644 - val_loss: 9.5090 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2644 - val_loss: 9.5088 Epoch 10/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2644 - val_loss: 9.5096 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 6 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2649 - val_loss: 9.5149 Epoch 2/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2649 - val_loss: 9.5144 Epoch 3/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2649 - val_loss: 9.5145 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2649 - val_loss: 9.5138 Epoch 5/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2649 - val_loss: 9.5139 Epoch 6/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2649 - val_loss: 9.5132 Epoch 7/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2649 - val_loss: 9.5129 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2648 - val_loss: 9.5134 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2648 - val_loss: 9.5133 Epoch 10/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2648 - val_loss: 9.5126 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 7 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2668 - val_loss: 9.5089 Epoch 2/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2668 - val_loss: 9.5088 Epoch 3/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2668 - val_loss: 9.5083 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2668 - val_loss: 9.5083 Epoch 5/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2668 - val_loss: 9.5084 Epoch 6/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2668 - val_loss: 9.5082 Epoch 7/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2668 - val_loss: 9.5082 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2668 - val_loss: 9.5088 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2667 - val_loss: 9.5082 Epoch 10/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2667 - val_loss: 9.5084 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 8 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5080 Epoch 2/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5081 Epoch 3/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5077 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5074 Epoch 5/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2660 - val_loss: 9.5069 Epoch 6/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5077 Epoch 7/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5072 Epoch 8/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2660 - val_loss: 9.5069 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5065 Epoch 10/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5068 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 9 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5049 Epoch 2/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5051 Epoch 3/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5057 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5051 Epoch 5/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5054 Epoch 6/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5059 Epoch 7/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5054 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5052 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5052 Epoch 10/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2675 - val_loss: 9.5061 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 10 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2653 - val_loss: 9.5058 Epoch 2/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2653 - val_loss: 9.5061 Epoch 3/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2653 - val_loss: 9.5054 Epoch 4/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2653 - val_loss: 9.5061 Epoch 5/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2653 - val_loss: 9.5051 Epoch 6/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2653 - val_loss: 9.5055 Epoch 7/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2653 - val_loss: 9.5059 Epoch 8/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2653 - val_loss: 9.5060 Epoch 9/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2653 - val_loss: 9.5058 Epoch 10/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2653 - val_loss: 9.5055 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 11 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2665 - val_loss: 9.5064 Epoch 2/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2665 - val_loss: 9.5070 Epoch 3/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2665 - val_loss: 9.5072 Epoch 4/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2665 - val_loss: 9.5078 Epoch 5/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2665 - val_loss: 9.5072 Epoch 6/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2665 - val_loss: 9.5081 Epoch 7/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2665 - val_loss: 9.5081 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2665 - val_loss: 9.5085 Epoch 9/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2665 - val_loss: 9.5083 Epoch 10/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2665 - val_loss: 9.5090 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 12 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2646 - val_loss: 9.5069 Epoch 2/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2646 - val_loss: 9.5081 Epoch 3/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2646 - val_loss: 9.5083 Epoch 4/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2646 - val_loss: 9.5079 Epoch 5/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2645 - val_loss: 9.5083 Epoch 6/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2645 - val_loss: 9.5082 Epoch 7/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2645 - val_loss: 9.5088 Epoch 8/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2645 - val_loss: 9.5086 Epoch 9/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2645 - val_loss: 9.5089 Epoch 10/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2645 - val_loss: 9.5095 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 13 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2671 - val_loss: 9.5111 Epoch 2/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2671 - val_loss: 9.5108 Epoch 3/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2671 - val_loss: 9.5115 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2671 - val_loss: 9.5111 Epoch 5/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2671 - val_loss: 9.5116 Epoch 6/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2671 - val_loss: 9.5123 Epoch 7/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2671 - val_loss: 9.5122 Epoch 8/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2671 - val_loss: 9.5115 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2670 - val_loss: 9.5121 Epoch 10/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2670 - val_loss: 9.5114 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 14 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2658 - val_loss: 9.5103 Epoch 2/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2658 - val_loss: 9.5099 Epoch 3/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2658 - val_loss: 9.5095 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2658 - val_loss: 9.5091 Epoch 5/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2658 - val_loss: 9.5091 Epoch 6/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2658 - val_loss: 9.5093 Epoch 7/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2658 - val_loss: 9.5095 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2658 - val_loss: 9.5092 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2658 - val_loss: 9.5089 Epoch 10/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2658 - val_loss: 9.5091 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 15 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2669 - val_loss: 9.5113 Epoch 2/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2669 - val_loss: 9.5127 Epoch 3/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2669 - val_loss: 9.5117 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2669 - val_loss: 9.5116 Epoch 5/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2669 - val_loss: 9.5117 Epoch 6/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2669 - val_loss: 9.5115 Epoch 7/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2669 - val_loss: 9.5117 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2669 - val_loss: 9.5120 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2669 - val_loss: 9.5114 Epoch 10/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2669 - val_loss: 9.5114 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 16 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5148 Epoch 2/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5152 Epoch 3/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5156 Epoch 4/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2660 - val_loss: 9.5152 Epoch 5/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2660 - val_loss: 9.5153 Epoch 6/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2660 - val_loss: 9.5151 Epoch 7/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2660 - val_loss: 9.5155 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5154 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2660 - val_loss: 9.5152 Epoch 10/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2660 - val_loss: 9.5157 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 17 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2652 - val_loss: 9.5141 Epoch 2/10 4032782/4032782 [==============================] - 7s 2us/sample - loss: 8.2652 - val_loss: 9.5134 Epoch 3/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2652 - val_loss: 9.5145 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2652 - val_loss: 9.5137 Epoch 5/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2652 - val_loss: 9.5143 Epoch 6/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2652 - val_loss: 9.5145 Epoch 7/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2652 - val_loss: 9.5138 Epoch 8/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2652 - val_loss: 9.5130 Epoch 9/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2652 - val_loss: 9.5141 Epoch 10/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2652 - val_loss: 9.5138 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 18 CV train shape: (4480869,) and (4480869,) WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’> Train on 4032782 samples, validate on 448087 samples Epoch 1/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2655 - val_loss: 9.5130 Epoch 2/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2655 - val_loss: 9.5133 Epoch 3/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2655 - val_loss: 9.5135 Epoch 4/10 4032782/4032782 [==============================] - 6s 1us/sample - loss: 8.2655 - val_loss: 9.5133 Epoch 5/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2655 - val_loss: 9.5137 Epoch 6/10 4032782/4032782 [==============================] - 6s 2us/sample - loss: 8.2655 - val_loss: 9.5140 Epoch 7/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2655 - val_loss: 9.5121 Epoch 8/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2655 - val_loss: 9.5140 Epoch 9/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2655 - val_loss: 9.5136 Epoch 10/10 4032782/4032782 [==============================] - 5s 1us/sample - loss: 8.2655 - val_loss: 9.5142 WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class ‘list’> containing values of types {“<class ‘pandas.core.series.Series’>”}), <class ‘NoneType’>

png

png

Fold: 19 CV train shape: (4480869,) and (4480869,)

(235836, 2) (235836, 2) (235836, 2) (235836, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2)

(4716704, 1)

(4716704, 7)

(4716704, 8)

Season T1_TeamID T2_TeamID Score_Diff win index_t1 index_t2 strength
0 2003 1421 1411 8 1 214 206 -2.002989
1 2003 1421 1411 8 1 214 206 -2.003113
2 2003 1421 1411 8 1 214 206 -2.004044
3 2003 1421 1411 8 1 214 206 -2.010653
4 2003 1421 1411 8 1 214 206 -1.999414

20

../model/id2vec_model_0.h5 saved.
../model/id2vec_model_1.h5 saved.
../model/id2vec_model_2.h5 saved.
../model/id2vec_model_3.h5 saved.
../model/id2vec_model_4.h5 saved.
../model/id2vec_model_5.h5 saved.
../model/id2vec_model_6.h5 saved.
../model/id2vec_model_7.h5 saved.
../model/id2vec_model_8.h5 saved.
../model/id2vec_model_9.h5 saved.
../model/id2vec_model_10.h5 saved.
../model/id2vec_model_11.h5 saved.
../model/id2vec_model_12.h5 saved.
../model/id2vec_model_13.h5 saved.
../model/id2vec_model_14.h5 saved.
../model/id2vec_model_15.h5 saved.
../model/id2vec_model_16.h5 saved.
../model/id2vec_model_17.h5 saved.
../model/id2vec_model_18.h5 saved.
../model/id2vec_model_19.h5 saved.
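The 20 files above hold one fitted model per fold. A minimal sketch of how a per-team strength score, in the spirit of the strength column shown earlier, could be recovered by averaging the shared embedding across the fold models; the layer name Team-Strength-Model comes from the model summary in 9.2 below, and the rest is an assumption rather than the author's code:

```python
import numpy as np
from tensorflow.keras.models import load_model

paths = [f"../model/id2vec_model_{i}.h5" for i in range(20)]

embeddings = []
for p in paths:
    m = load_model(p)
    sub = m.get_layer("Team-Strength-Model")  # shared sub-model holding the Embedding
    embeddings.append(sub.get_weights()[0])   # assumed shape: (n_teams, 1)

strength = np.mean(embeddings, axis=0).ravel()  # one averaged score per team index
```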

9.2 Case Study

The embedding model in the introduction section overfits easily, so K-fold cross-validation is used here; an overview sketch of the fold loop follows below.
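A minimal sketch of the loop assumed here, not the author's verbatim code: 20 folds (matching the 20 fold/model files in this section), a hypothetical `build_model()` for the embedding model summarized later, and `df` for the games table loaded below. Passing pandas Series to `fit` reproduces the data-adapter warning seen in the logs.

```python
from sklearn.model_selection import KFold

N_FOLDS = 20  # matches the 20 fold_id_*.pkl and id2vec_model_*.h5 files in this section

kf = KFold(n_splits=N_FOLDS, shuffle=True, random_state=42)  # random_state is an assumption
for fold, (train_idx, val_idx) in enumerate(kf.split(df)):
    cv_train = df.iloc[train_idx]
    print("Fold:", fold, "CV train shape:", cv_train["index_t1"].shape,
          "and", cv_train["index_t2"].shape)

    model = build_model()  # hypothetical factory for the subtract-of-strengths model below
    model.fit(
        [cv_train["index_t1"], cv_train["index_t2"]],  # Series input triggers the v2-loop fallback warning
        cv_train["Score_Diff"],
        epochs=10,
        validation_split=0.1,  # 4480868 * 0.9 = 4032781 train / 448087 val, matching the logs
        batch_size=16384,      # assumption; the logs do not show the batch size
    )
    model.save(f"../model/id2vec_model_{fold}.h5")
```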

Tensorflow version: 2.0.0
Keras version: 2.2.4-tf

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4716704 entries, 0 to 4716703
Data columns (total 7 columns):
Season        int64
T1_TeamID     int64
T2_TeamID     int64
Score_Diff    int64
win           int64
index_t1      int64
index_t2      int64
dtypes: int64(7)
memory usage: 251.9 MB

Season T1_TeamID T2_TeamID Score_Diff win index_t1 index_t2
0 2003 1421 1411 8 1 214 206
1 2003 1421 1411 8 1 214 206
2 2003 1421 1411 8 1 214 206
3 2003 1421 1411 8 1 214 206
4 2003 1421 1411 8 1 214 206
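index_t1 and index_t2 are dense indices for the raw team IDs (1421 maps to 214 and 1411 to 206 above), which an Embedding layer requires. One common way to build such a mapping; this is an assumption, not necessarily the author's code:

```python
import numpy as np

# Map raw TeamIDs to consecutive embedding indices 0 .. n_teams-1.
team_ids = np.unique(np.concatenate([df["T1_TeamID"], df["T2_TeamID"]]))  # sorted unique IDs
id2index = {tid: i for i, tid in enumerate(team_ids)}

df["index_t1"] = df["T1_TeamID"].map(id2index)
df["index_t2"] = df["T2_TeamID"].map(id2index)
```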

min    2003
max    2019
Name: Season, dtype: int64

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_2 (InputLayer)            [(None, 1)]          0
__________________________________________________________________________________________________
input_3 (InputLayer)            [(None, 1)]          0
__________________________________________________________________________________________________
Team-Strength-Model (Model)     (None, 1)            249         input_2[0][0]
                                                                 input_3[0][0]
__________________________________________________________________________________________________
subtract (Subtract)             (None, 1)            0           Team-Strength-Model[1][0]
                                                                 Team-Strength-Model[2][0]
==================================================================================================
Total params: 249
Trainable params: 249
Non-trainable params: 0
__________________________________________________________________________________________________

(two figures omitted)
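A minimal sketch of a functional-API model consistent with the summary above: a shared sub-model maps a team index to a scalar strength through a 1-dimensional Embedding, and a Subtract layer outputs the strength difference. The 249 parameters suggest roughly 249 team indices with output_dim=1; that count, the optimizer, and the MAE loss (whose ~8.3 scale matches the logs) are inferences, not statements from the text:

```python
from tensorflow.keras.layers import Embedding, Flatten, Input, Subtract
from tensorflow.keras.models import Model

n_teams = 249  # inferred from "Total params: 249" (249 indices x 1-dim embedding)

# Shared sub-model: team index -> scalar strength.
team_in = Input(shape=(1,))
s = Embedding(input_dim=n_teams, output_dim=1)(team_in)  # 249 trainable weights
s = Flatten()(s)                                         # (None, 1, 1) -> (None, 1)
team_strength_model = Model(team_in, s, name="Team-Strength-Model")

# Full model: strength difference between the two teams in a game.
team_1_in = Input(shape=(1,))
team_2_in = Input(shape=(1,))
diff = Subtract()([team_strength_model(team_1_in), team_strength_model(team_2_in)])

model = Model([team_1_in, team_2_in], diff)
model.compile(optimizer="adam", loss="mean_absolute_error")  # both are assumptions
model.summary()
```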

(4716704, 7)

0 ../model/fold_id_0.pkl saved.
1 ../model/fold_id_1.pkl saved.
2 ../model/fold_id_2.pkl saved.
3 ../model/fold_id_3.pkl saved.
4 ../model/fold_id_4.pkl saved.
5 ../model/fold_id_5.pkl saved.
6 ../model/fold_id_6.pkl saved.
7 ../model/fold_id_7.pkl saved.
8 ../model/fold_id_8.pkl saved.
9 ../model/fold_id_9.pkl saved.
10 ../model/fold_id_10.pkl saved.
11 ../model/fold_id_11.pkl saved.
12 ../model/fold_id_12.pkl saved.
13 ../model/fold_id_13.pkl saved.
14 ../model/fold_id_14.pkl saved.
15 ../model/fold_id_15.pkl saved.
16 ../model/fold_id_16.pkl saved.
17 ../model/fold_id_17.pkl saved.
18 ../model/fold_id_18.pkl saved.
19 ../model/fold_id_19.pkl saved.
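The fold indices are apparently pickled once so that later runs reuse identical splits. A minimal sketch, assuming the same KFold splitter as in the overview sketch above:

```python
import pickle
from sklearn.model_selection import KFold

kf = KFold(n_splits=20, shuffle=True, random_state=42)  # random_state is an assumption
for i, (train_idx, val_idx) in enumerate(kf.split(df)):
    path = f"../model/fold_id_{i}.pkl"
    with open(path, "wb") as f:
        pickle.dump((train_idx, val_idx), f)  # store both index arrays for the fold
    print(i, path, "saved.")
```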

WARNING:tensorflow:Falling back from v2 loop because of error: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'pandas.core.series.Series'>"}), <class 'NoneType'>
Train on 4032781 samples, validate on 448087 samples
Epoch 1/10  - loss: 8.2729 - val_loss: 9.5765
Epoch 10/10 - loss: 8.2709 - val_loss: 9.5550

(Epochs 2-9, the warning's later repetitions, and two figures are omitted here and in every per-fold log below.)
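The recurring warning is TF 2.0's new (v2) training loop rejecting a Python list of pandas Series, after which Keras silently falls back to the v1 loop. Converting the Series to NumPy arrays avoids the fallback; a minimal sketch, assuming the input and target columns used above:

```python
# .values turns each pandas Series into a NumPy array the v2 data adapter accepts.
x = [cv_train["index_t1"].values, cv_train["index_t2"].values]
y = cv_train["Score_Diff"].values

model.fit(x, y, epochs=10, validation_split=0.1)
```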

Per-fold results, condensed from the full epoch logs (epoch 1 to epoch 10; each fold trained on 4032781 or 4032782 samples and validated on 448087):

Fold  CV train shape   loss E1   loss E10  val_loss E1  val_loss E10
   0  (4480868,)       8.2682    8.2674    9.5451       9.5266
   1  (4480868,)       8.2674    8.2669    9.5286       9.5217
   2  (4480868,)       8.2676    8.2671    9.5230       9.5201
   3  (4480868,)       8.2674    8.2672    9.5166       9.5138
   4  (4480869,)       8.2661    8.2658    9.5129       9.5106
   5  (4480869,)       8.2645    8.2644    9.5089       9.5096
   6  (4480869,)       8.2649    8.2648    9.5149       9.5126
   7  (4480869,)       8.2668    8.2667    9.5089       9.5084
   8  (4480869,)       8.2660    8.2660    9.5080       9.5068
   9  (4480869,)       8.2675    8.2675    9.5049       9.5061
  10  (4480869,)       8.2653    8.2653    9.5058       9.5055
  11  (4480869,)       8.2665    8.2665    9.5064       9.5090
  12  (4480869,)       8.2646    8.2645    9.5069       9.5095
  13  (4480869,)       8.2671    8.2670    9.5111       9.5114
  14  (4480869,)       8.2658    8.2658    9.5103       9.5091
  15  (4480869,)       8.2669    8.2669    9.5113       9.5114
  16  (4480869,)       8.2660    8.2660    9.5148       9.5157
  17  (4480869,)       8.2652    8.2652    9.5141       9.5138
  18  (4480869,)       8.2655    8.2655    9.5130       9.5142

Fold: 19 CV train shape: (4480869,) and (4480869,)
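The warning itself is informative: the Keras v2 training loop has no data adapter for a Python list of pandas Series, so it falls back to the slower v1 loop. Converting each Series to a NumPy array before calling fit is the usual remedy; a minimal sketch, in which the DataFrame name and the choice of input/target columns are assumptions:

```python
# Minimal sketch, assuming `train` is the DataFrame shown below and the
# model takes the two team-ID columns as a list of inputs.
# The v2 loop accepts NumPy arrays where it rejects pandas Series.
x_train = [train['T1_TeamID'].values, train['T2_TeamID'].values]
y_train = train['Score_Diff'].values

# model.fit(x_train, y_train, validation_split=0.1, epochs=10)
```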

(235836, 2) (235836, 2) (235836, 2) (235836, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2) (235835, 2)

(4716704, 1)

(4716704, 7)

(4716704, 8)

Season T1_TeamID T2_TeamID Score_Diff win index_t1 index_t2 strength
0 2003 1421 1411 8 1 214 206 -2.002989
1 2003 1421 1411 8 1 214 206 -2.003113
2 2003 1421 1411 8 1 214 206 -2.004044
3 2003 1421 1411 8 1 214 206 -2.010653
4 2003 1421 1411 8 1 214 206 -1.999414

20

../model/id2vec_model_0.h5 ../model/id2vec_model_1.h5 ../model/id2vec_model_2.h5 ../model/id2vec_model_3.h5 ../model/id2vec_model_4.h5 ../model/id2vec_model_5.h5 ../model/id2vec_model_6.h5 ../model/id2vec_model_7.h5 ../model/id2vec_model_8.h5 ../model/id2vec_model_9.h5 ../model/id2vec_model_10.h5 ../model/id2vec_model_11.h5 ../model/id2vec_model_12.h5 ../model/id2vec_model_13.h5 ../model/id2vec_model_14.h5 ../model/id2vec_model_15.h5 ../model/id2vec_model_16.h5 ../model/id2vec_model_17.h5 ../model/id2vec_model_18.h5 ../model/id2vec_model_19.h5

9.3 Feature engineering

Goals and shots are two of the more important feature variables. From a conversion-rate point of view, a goal is like the numerator and a shot like the denominator, so both are worth including.

Figure 9.1: Goals and shots are two of the more important feature variables; from a conversion-rate point of view, goals act as the numerator and shots as the denominator, so both are worth including.

Nandakumar (2020) likewise studies which of two teams wins, also framed as a binary classification problem.

The distance of shots and goals from the net is also a feature variable: the correlation heat map shows it is discriminative, and it also ranks as significant in XGBoost's feature importances.

Figure 9.2: The distance of shots and goals from the net is also a feature variable; it is discriminative in the correlation heat map and significant in XGBoost's feature importances.
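A sketch of how these two feature families might be computed; all column names and the net coordinates here are hypothetical, not taken from the original code:

```python
# Hypothetical sketch: conversion-rate and distance-to-net features.
import numpy as np
import pandas as pd

NET_X, NET_Y = 89.0, 0.0  # assumed coordinates of the net


def add_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # goals as numerator, shots as denominator of a conversion rate
    out['conv_rate'] = out['goals'] / out['shots'].clip(lower=1)
    # Euclidean distance of each event from the net
    out['dist_to_net'] = np.hypot(out['x'] - NET_X, out['y'] - NET_Y)
    return out
```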

10 Appendix

10.1 .numpy() errors

Upgrading TensorFlow to version 2.0 fixes this.
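The reason is that `.numpy()` only works under eager execution, which TensorFlow 2.0 enables by default. A small sketch of the behavior:

```python
# Under TF 1.x graph mode this raises AttributeError; under TF 2.0 it
# works, because eager execution is on by default.
# Upgrade with: pip install --upgrade tensorflow==2.0.0
import tensorflow as tf

t = tf.constant([1.0, 2.0])
print(t.numpy())  # array([1., 2.], dtype=float32)
```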

10.3 Choosing a deeper model

This is meant to echo Geoff Hinton's remark that model performance improves as depth increases.

From the second epoch onward the loss sits at about 1.386 and accuracy near 0.25, and nothing changes through epoch 100:

Train on 1440 samples, validate on 160 samples
Epoch 1/100    1440/1440 - 0s 172us/sample - loss: 10.4524 - accuracy: 0.2382 - val_loss: 1.4202 - val_accuracy: 0.2750
Epoch 2/100    1440/1440 - 0s 45us/sample - loss: 1.3890 - accuracy: 0.2507 - val_loss: 1.3859 - val_accuracy: 0.2750
...
Epoch 100/100  1440/1440 - 0s 36us/sample - loss: 1.3862 - accuracy: 0.2354 - val_loss: 1.3880 - val_accuracy: 0.2000

<tensorflow.python.keras.callbacks.History at 0xb2c20efd0>

2000/2000 [==============================] - 0s 48us/sample - loss: 1.3866 - accuracy: 0.2495

[1.3865726413726807, 0.2495]

array([[0.24210231, 0.2508569 , 0.25387233, 0.25316837],
       [0.24210231, 0.2508569 , 0.25387233, 0.25316837],
       [0.24210231, 0.2508569 , 0.25387233, 0.25316837],
       [0.24210231, 0.2508569 , 0.25387233, 0.25316837],
       [0.24210231, 0.2508569 , 0.25387233, 0.25316837]], dtype=float32)

The model's poor performance shows up not only as low accuracy but also as predictions that are identical for every sample, which is a very bad sign. Note that the stuck loss, 1.3862, is almost exactly ln 4 ≈ 1.3863: the network has collapsed to an essentially uniform distribution over the four classes, consistent with the near-0.25 probabilities above.

10.4 Preventing overfitting

You may have noticed that the final train accuracy, acc, is higher than the final validation accuracy, val_acc. This suggests that you are overfitting and may want to reduce the number of epochs. (Hull 2019)

So is the way to prevent overfitting simply to train for fewer epochs? In practice this is usually automated with early stopping, as in the sketch below.
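A minimal sketch of early stopping; the fit arguments in the commented call are assumptions, not the original training setup:

```python
# Stop when val_loss has not improved for 5 epochs and roll back to the
# best weights seen so far, which caps the effective number of epochs.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5,
                           restore_best_weights=True)

# model.fit(x_train, y_train, validation_split=0.1,
#           epochs=200, callbacks=[early_stop])
```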

10.5 Certificate of completion

10.6 sparse_categorical_crossentropy vs. categorical_crossentropy

See Lin (2018).

If your targets are one-hot encoded, use categorical_crossentropy. Examples of one-hot encodings:

[1,0,0]
[0,1,0]
[0,0,1]

But if your targets are integers, use sparse_categorical_crossentropy. Examples of integer encodings (for the sake of completion):

1
2
3
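A small check that the two losses agree when fed matching encodings; the logits below are made up for illustration:

```python
# The same 3-class targets, once as integers and once one-hot encoded,
# give identical cross-entropy values under the matching loss.
import numpy as np
import tensorflow as tf

y_int = np.array([0, 1, 2])                                  # integer labels
y_onehot = tf.keras.utils.to_categorical(y_int, num_classes=3)

probs = tf.nn.softmax(tf.constant([[2.0, 1.0, 0.1],
                                   [0.5, 2.5, 0.3],
                                   [0.2, 0.4, 3.0]]))

print(tf.keras.losses.sparse_categorical_crossentropy(y_int, probs).numpy())
print(tf.keras.losses.categorical_crossentropy(y_onehot, probs).numpy())
# the two printed vectors match element-wise
```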

Model: "sequential_7"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_41 (Dense)             (None, 512)               401920
_________________________________________________________________
dense_42 (Dense)             (None, 256)               131328
_________________________________________________________________
dense_43 (Dense)             (None, 128)               32896
_________________________________________________________________
dense_44 (Dense)             (None, 64)                8256
_________________________________________________________________
dense_45 (Dense)             (None, 32)                2080
_________________________________________________________________
dense_46 (Dense)             (None, 16)                528
_________________________________________________________________
dense_47 (Dense)             (None, 8)                 136
_________________________________________________________________
dense_48 (Dense)             (None, 4)                 36
=================================================================
Total params: 577,180
Trainable params: 577,180
Non-trainable params: 0
_________________________________________________________________
None
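This summary can be reproduced by a plain stack of Dense layers; the input width 784 follows from 401920 = 784 × 512 + 512. The activations below are my assumption, and the trailing None in the output comes from wrapping model.summary() in print:

```python
# Sketch of a model matching the summary above (activations assumed).
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(512, activation='relu', input_shape=(784,)),
    Dense(256, activation='relu'),
    Dense(128, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(16, activation='relu'),
    Dense(8, activation='relu'),
    Dense(4, activation='softmax'),
])
print(model.summary())  # summary() prints the table and returns None
```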

<class 'numpy.ndarray'> <class 'numpy.ndarray'> (2000, 784) (2000,)
<class 'numpy.ndarray'> <class 'numpy.ndarray'> (1600, 784) (1600,)
<class 'numpy.ndarray'> <class 'numpy.ndarray'> (400, 784) (400,)

This run behaves the same way: for all 200 epochs the loss is pinned at about 1.3862, accuracy at 0.2569, and validation accuracy at 0.2188:

Train on 1440 samples, validate on 160 samples
Epoch 1/200    1440/1440 - 0s 133us/sample - loss: 1.3861 - accuracy: 0.2569 - val_loss: 1.3895 - val_accuracy: 0.2188
...
Epoch 200/200  1440/1440 - 0s 107us/sample - loss: 1.3862 - accuracy: 0.2569 - val_loss: 1.3896 - val_accuracy: 0.2188

<tensorflow.python.keras.callbacks.History at 0x1a3a5a7278>

400/400 [==============================] - 0s 83us/sample - loss: 1.3870 - accuracy: 0.2375

[1.3869930458068849, 0.2375]

Looking at it this way, this really is alchemy: the performance suddenly became terrible.

10.7 Cross-validation example

Dataset information: the dataset is extracted from electric current drive signals. The drive contains intact as well as defective components, so the dataset has 11 classes based on the condition of the components. The aim is to predict the correct component condition from the input variables using deep learning. Tools used: Keras, TensorFlow.

Dataset rights: this dataset is taken from the University of California Irvine Machine Learning Repository for educational purposes, and all rights to it are reserved by them. For more details, such as the contents and owner of the dataset and the reference research paper, see: https://archive.ics.uci.edu/ml/datasets/Dataset+for+Sensorless+Drive+Diagnosis

10.7.1 Import and print data

       0             1         2             3             4         5   \
0 -3.014600e-07  8.260300e-06 -0.000012 -2.309800e-06 -1.438600e-06 -0.000021   
1  2.913200e-06 -5.247700e-06  0.000003 -6.056100e-06  2.778900e-06 -0.000004   
2 -2.951700e-06 -3.184000e-06 -0.000016 -1.208400e-06 -1.575300e-06  0.000017   
3 -1.322600e-06  8.820100e-06 -0.000016 -4.811100e-06 -7.282900e-07  0.000004   
4 -6.836600e-08  5.666300e-07 -0.000026 -6.490100e-06 -7.940600e-07  0.000013   
5 -9.584900e-07  5.214300e-08 -0.000047  6.453700e-07 -2.304100e-06  0.000055   
6 -1.783700e-06  1.303600e-06 -0.000038 -7.083900e-06  6.706300e-06  0.000022   
7 -2.566600e-06 -1.679500e-07  0.000015 -1.598400e-06  8.709200e-07  0.000015   
8 -5.474000e-06  1.086500e-07 -0.000011 -1.815600e-06  4.757800e-07  0.000024   
9  2.825600e-06  6.067600e-06  0.000118 -4.347500e-06  1.492300e-06 -0.000003   

         6         7         8         9   ...       39       40       41  \
0  0.031718  0.031710  0.031721 -0.032963  ... -0.63308   2.9646   8.1198   
1  0.030804  0.030810  0.030806 -0.033520  ... -0.59314   7.6252   6.1690   
2  0.032877  0.032880  0.032896 -0.029834  ... -0.63252   2.7784   5.3017   
3  0.029410  0.029401  0.029417 -0.030156  ... -0.62289   6.5534   6.2606   
4  0.030119  0.030119  0.030145 -0.031393  ... -0.63010   4.5155   9.5231   
5  0.031154  0.031154  0.031201 -0.032789  ... -0.61124   5.8337  18.6970   
6  0.033805  0.033804  0.033841 -0.033345  ... -0.62501  17.6180   9.0823   
7  0.031071  0.031071  0.031056 -0.029695  ... -0.63618  12.8380   4.9079   
8  0.031309  0.031308  0.031319 -0.031344  ... -0.62465   2.9841   6.6883   
9  0.030947  0.030941  0.030823 -0.032171  ... -0.58170   2.0511   8.8939   

       42      43      44      45      46      47  48  
0 -1.4961 -1.4961 -1.4961 -1.4996 -1.4996 -1.4996   1  
1 -1.4967 -1.4967 -1.4967 -1.5005 -1.5005 -1.5005   1  
2 -1.4983 -1.4983 -1.4982 -1.4985 -1.4985 -1.4985   1  
3 -1.4963 -1.4963 -1.4963 -1.4975 -1.4975 -1.4976   1  
4 -1.4958 -1.4958 -1.4958 -1.4959 -1.4959 -1.4959   1  
5 -1.4956 -1.4956 -1.4956 -1.4973 -1.4972 -1.4973   1  
6 -1.4960 -1.4960 -1.4959 -1.4954 -1.4954 -1.4953   1  
7 -1.4990 -1.4990 -1.4990 -1.4968 -1.4968 -1.4968   1  
8 -1.4955 -1.4955 -1.4955 -1.4947 -1.4947 -1.4947   1  
9 -1.4963 -1.4963 -1.4963 -1.4943 -1.4943 -1.4943   1  

[10 rows x 49 columns]

array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])

10.7.2 Data pre-processing

0     0
1     0
2     0
3     0
     ..
45    0
46    0
47    0
48    0
Length: 49, dtype: int64

(58509, 48) (58509,)
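Since the class labels run from 1 to 11 (see the array above), they must be shifted to 0-based for sparse_categorical_crossentropy, or one-hot encoded for categorical_crossentropy, before training. A minimal sketch, assuming y is the label vector:

```python
# Minimal sketch, assuming `y` is the label vector of shape (58509,).
from tensorflow.keras.utils import to_categorical

y_zero_based = y - 1                                      # classes 0..10
y_onehot = to_categorical(y_zero_based, num_classes=11)   # shape (58509, 11)
```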

           0             1             2             3             4   \
count  58509.000000  5.850900e+04  5.850900e+04  58509.000000  5.850900e+04   
mean      -0.000003  1.439648e-06  1.412013e-06     -0.000001  1.351239e-06   
std        0.000072  5.555429e-05  2.353009e-04      0.000063  5.660943e-05   
min       -0.013721 -5.414400e-03 -1.358000e-02     -0.012787 -8.355900e-03   
25%       -0.000007 -1.444400e-05 -7.239600e-05     -0.000005 -1.475300e-05   
50%       -0.000003  8.804600e-07  5.137700e-07     -0.000001  7.540200e-07   
75%        0.000002  1.877700e-05  7.520000e-05      0.000004  1.906200e-05   
max        0.005784  4.525300e-03  5.237700e-03      0.001453  8.245100e-04   

                 5             6             7             8             9   \
count  5.850900e+04  58509.000000  58509.000000  58509.000000  58509.000000   
mean  -2.654483e-07      0.001915      0.001913      0.001912     -0.011897   
std    2.261907e-04      0.036468      0.036465      0.036470      0.066482   
min   -9.741300e-03     -0.139890     -0.135940     -0.130860     -0.218640   
25%   -7.379100e-05     -0.019927     -0.019951     -0.019925     -0.032144   
50%   -1.659300e-07      0.013226      0.013230      0.013247     -0.015566   
75%    7.138600e-05      0.024770      0.024776      0.024777      0.020614   
max    2.753600e-03      0.069125      0.069130      0.069131      0.352580   

       ...            38            39            40            41  \
count  ...  58509.000000  58509.000000  58509.000000  58509.000000   
mean   ...      8.406765     -0.397757      7.293781      8.273772   
std    ...      6.897301     25.018728     12.451781      6.565952   
min    ...      0.522180     -0.902350     -0.596830      0.320660   
25%    ...      4.451300     -0.715470      1.450300      4.436300   
50%    ...      6.566800     -0.661710      3.301300      6.479100   
75%    ...      9.952600     -0.573980      8.288500      9.857500   
max    ...    265.330000   3670.800000    889.930000    153.150000   

                 42            43            44            45            46  \
count  58509.000000  58509.000000  58509.000000  58509.000000  58509.000000   
mean      -1.500887     -1.500912     -1.500805     -1.497771     -1.497794   
std        0.003657      0.003668      0.003632      0.003163      0.003163   
min       -1.525500     -1.526200     -1.523700     -1.521400     -1.523200   
25%       -1.503300     -1.503400     -1.503200     -1.499600     -1.499600   
50%       -1.500300     -1.500300     -1.500300     -1.498100     -1.498100   
75%       -1.498200     -1.498200     -1.498200     -1.496200     -1.496300   
max       -1.457600     -1.456100     -1.455500     -1.337200     -1.337200   

                 47  
count  58509.000000  
mean      -1.497686  
std        0.003175  
min       -1.521300  
25%       -1.499500  
50%       -1.498000  
75%       -1.496200  
max       -1.337100  

[8 rows x 48 columns]

10.7.4 Define cross-validation and train pre-defined model
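
A sketch of the setup that produced the interrupted run below; the network architecture, epoch count, and batch size are assumptions, since the model-definition cell is not reproduced in these notes:

    import tensorflow as tf
    from sklearn.model_selection import KFold, cross_val_score
    from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

    def build_model():
        # Hypothetical architecture; only the 48 inputs and 11 classes are known.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(48,)),
            tf.keras.layers.Dense(11, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    estimator = KerasClassifier(build_fn=build_model, epochs=10,
                                batch_size=128, verbose=0)
    kfold = KFold(n_splits=10, shuffle=True)
    results = cross_val_score(estimator, x, y, cv=kfold)
    print("Result: %.2f%% (%.2f%%)" % (results.mean() * 100, results.std() * 100))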


KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input> in <module>
----> 1 results = cross_val_score(estimator, x, y, cv = kfold)
      2 print("Result: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

~/miniconda3/lib/python3.7/site-packages/sklearn/model_selection/_validation.py in cross_val_score(...)
~/miniconda3/lib/python3.7/site-packages/sklearn/model_selection/_validation.py in cross_validate(...)
~/miniconda3/lib/python3.7/site-packages/joblib/parallel.py in __call__(...)
~/miniconda3/lib/python3.7/site-packages/sklearn/model_selection/_validation.py in _fit_and_score(...)
~/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/wrappers/scikit_learn.py in fit(...)
~/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(...)
~/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(...)
~/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/callbacks.py in _call_batch_hook(...)
~/miniconda3/lib/python3.7/site-packages/numpy/lib/function_base.py in median(...)
~/miniconda3/lib/python3.7/site-packages/numpy/core/numeric.py in asanyarray(a, dtype, order)
--> 553     return array(a, dtype, copy=False, order=order, subok=True)

KeyboardInterrupt:

This looks like it will take quite a long time to run; run it on Kaggle with a GPU instead.

10.8 BN and Dropout
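
The summary below can be reproduced with a stack like the following sketch. The input shape (784,) is inferred from the first layer's parameter count (784 × 32 + 32 = 25,120); the activation functions and dropout rates are assumptions:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(784,)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dropout(0.25),   # rate is an assumption
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    print(model.summary())   # summary() prints the table and returns None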

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 32)                25120
_________________________________________________________________
batch_normalization_v2 (Batc (None, 32)                128
_________________________________________________________________
dropout (Dropout)            (None, 32)                0
_________________________________________________________________
dense_1 (Dense)              (None, 16)                528
_________________________________________________________________
batch_normalization_v2_1 (Ba (None, 16)                64
_________________________________________________________________
dropout_1 (Dropout)          (None, 16)                0
_________________________________________________________________
dense_2 (Dense)              (None, 8)                 136
_________________________________________________________________
batch_normalization_v2_2 (Ba (None, 8)                 32
_________________________________________________________________
dropout_2 (Dropout)          (None, 8)                 0
_________________________________________________________________
dense_3 (Dense)              (None, 4)                 36
=================================================================
Total params: 26,044
Trainable params: 25,932
Non-trainable params: 112
_________________________________________________________________
None


10.9 The h5 model file
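
A sketch of the save step, assuming `model` is the BN/Dropout network from the previous section:

    import os

    os.makedirs("output", exist_ok=True)
    # Persist architecture + weights + optimizer state into one HDF5 file.
    model.save("output/naive-model.h5")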

$ du -hs output/naive-model.h5
404K    output/naive-model.h5

Not large at all.

10.10 Gradient descent

10.10.1 Preface

This section follows Ng's (2008) machine learning course; a link to the lecture videos is given at the end.

Once you are working, a course like this is best consumed for intuition rather than formal derivations, with the emphasis on implementation. This section itself contains no code; the text is only there to make the logic behind the code clear, because the code is ultimately matrix operations, and understanding the logic first makes the code much easier to read.

  • first, some notation;
  • then the learning-rate \(\alpha\) update rule, and the intuition behind it;
  • the two main flavors of gradient descent;
  • and a worked example.

10.10.2 Notation

The notation differs from the conventions in economics, so it helps to line the two up; Andrew explains it with good intuition. Start from the population model

\[Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \mu\]

which we predict with

\[\hat y = \hat \beta_0 + \hat \beta_1 x_1 + \hat \beta_2 x_2\]

Here, define

\[h_{\theta}(X) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 \]

  • \(h\) stands for the hypothesis, i.e. which specific model is used;
  • \(\theta\) stands for the parameters of the hypothesis;
  • \(X\) is the feature vector;
  • so \(h_{\theta}(X)\) here is exactly \(\hat y\).

Hence \[J(\theta) = \frac{1}{2}\sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})^2\]

The sample index is conventionally written as a superscript, as in \(x^{(i)}\); \(m\) is the number of samples.

10.10.3 The learning-rate \(\alpha\) update rule

In one sentence: adjust \(\theta\) step by step to reduce \(J(\theta)\):

\[\theta_i:=\theta_i - \alpha \frac{\partial}{\partial \theta_i}J(\theta)\]

  • here \(i = 1,...,n\) indexes the parameter attached to the \(i\)-th feature;
  • \(:=\) means the left-hand side is updated with the value of the right-hand side;
  • the intuition: when \(\frac{\partial}{\partial \theta_i}J(\theta) = 0\), the first-order condition holds and \(J(\theta)\) attains an extremum.

If \(\frac{\partial}{\partial \theta_i}J(\theta) > 0\), then increasing \(\theta_i\) increases \(J(\theta)\), so we must move in the opposite direction, i.e. the updated \(\theta_i\) should shrink. The update rule does exactly this: when \(\frac{\partial}{\partial \theta_i}J(\theta) > 0\) and \(\alpha > 0\), a positive number is subtracted from \(\theta_i^{old}\), which therefore decreases, and vice versa.

Hence the update rule always adjusts \(\theta_i\) in the correct direction.
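
A one-step numeric check, with hypothetical values \(\theta_i = 2\), \(\alpha = 0.1\), and \(\frac{\partial}{\partial \theta_i}J(\theta) = 4 > 0\):

\[\theta_i := 2 - 0.1 \times 4 = 1.6,\]

so \(\theta_i\) decreases, exactly as the sign argument predicts.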

In addition, \(\alpha > 0\) controls the size of each update step for \(\theta_i\): it should be neither too large nor too small, and in practice a grid search can be used to find the best \(\alpha\).

\(\Box\) Why is the gradient necessarily the direction of steepest descent?

10.10.4 Two main flavors of gradient descent

10.10.4.1 Batch Gradient Descent

\[\begin{alignat}{2} \theta_i & :=\theta_i - \alpha \frac{\partial}{\partial \theta_i}J(\theta) \\ & := \theta_i - \alpha \sum_{j=1}^m\left(h_{\theta}(x^{(j)})-y^{(j)}\right)x_i^{(j)} \\ \end{alignat}\]

Because each update of every parameter \(\theta_i\) requires a scan over all \(m\) samples, each step is expensive. The second flavor was proposed precisely to fix this.

10.10.4.2 Stochastic Gradient Descent

\[\theta_i := \theta_i - \alpha \left(h_{\theta}(x^{(j)})-y^{(j)}\right)x_i^{(j)}\]

Clearly this update uses only the \(j\)-th sample; all samples other than \(j\) are not touched.

We then run a for loop: starting from the \(\theta_i\) just obtained from a single sample, training continues over \(j = 1,...,m\). Each iteration of the loop feeds in one sample rather than all \(m\), which is far cheaper; a sketch follows.
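
A minimal sketch of one stochastic pass, with hypothetical toy data and a single feature:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 50)                # hypothetical one-feature data
    y = 3.0 * x + rng.normal(0, 0.1, 50)
    theta0, theta1, alpha, m = 0.0, 0.0, 0.1, len(x)

    # One stochastic pass: each sample j triggers its own update,
    # instead of one update per full scan over all m samples (batch).
    for j in range(m):
        err = theta0 + theta1 * x[j] - y[j]  # h_theta(x^(j)) - y^(j)
        theta0 -= alpha * err                # intercept term: x_0^(j) = 1
        theta1 -= alpha * err * x[j]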

10.10.5 Gradient descent in practice

Suppose we have data.

The goal is to use gradient descent to solve the regression problem

\[Y = \beta_0 + \beta_1 X + \mu\] and estimate \(\hat \beta_0\) and \(\hat \beta_1\):

\[\begin{cases} \hat \beta_0:=\hat \beta_0 - \alpha \frac{\partial L}{\partial \hat \beta_0}\\ \hat \beta_1:=\hat \beta_1 - \alpha \frac{\partial L}{\partial \hat \beta_1} \end{cases}\]

\[\begin{cases} \frac{\partial L}{\partial \hat \beta_0} = (\hat Y - Y)\\ \frac{\partial L}{\partial \hat \beta_1} = (\hat Y - Y)X\\ \end{cases}\]

taking \(L = \frac{1}{2}(\hat Y - Y)^2\) per sample, so that the update rule above actually moves downhill.

Now we can put this into practice. Below is Python code that can be run directly in Jupyter (Rizwan 2018).
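
The original code block is from Rizwan (2018); the sketch below is a minimal reconstruction with hypothetical toy data, written to match the two notes that follow:

    import numpy as np

    # Hypothetical toy data: y = 1 + 2x plus noise.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 100)
    y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)

    beta0, beta1, alpha = 0.0, 0.0, 0.01

    for step in range(101):
        y_hat = beta0 + beta1 * x
        cost = 0.5 * ((y_hat - y) ** 2).mean()  # L = (1/2m) * sum((y_hat - y)^2)
        if step % 10 == 0:                      # print the cost only every 10 steps
            print(f"step {step:3d}  cost {cost:8.4f}  "
                  f"beta0 {beta0:.4f}  beta1 {beta1:.4f}")
        # Batch gradients of L with respect to beta0 and beta1.
        beta0 -= alpha * (y_hat - y).mean()
        beta1 -= alpha * ((y_hat - y) * x).mean()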

  1. The cost is printed here, but only every 10 steps; otherwise there is too much output to follow.
  2. The cost keeps decreasing because it is the training-set cost; the test-set cost would not necessarily behave the same way.

References

Deane-Mayer, Zachary. 2018. “Classify and Regress at the Same Time.” Cross Validated. 2018. https://stats.stackexchange.com/q/364710.

———. 2019. “Advanced Deep Learning with Keras in Python.” DataCamp. 2019. https://www.datacamp.com/courses/advanced-deep-learning-with-keras-in-python.

Géron, Aurélien. 2019. Hands-on Machine Learning with Scikit-Learn, Keras, and Tensorflow. 2nd ed. United States of America: O’Reilly Media, Inc.

Hull, Isaiah. 2019. “Introduction to Tensorflow in Python.” DataCamp. 2019. https://www.datacamp.com/courses/introduction-to-tensorflow-in-python.

Kul, Nitish. 2019. “Multi-Class Classification with Keras Tensorflow.” Kaggle. 2019. https://www.kaggle.com/nitishkulkarni1006/multi-class-classification-with-keras-tensorflow.

Lin, Jovian. 2018. Jovian’s Notes. 2018. https://jovianlin.io/cat-crossentropy-vs-sparse-cat-crossentropy/.

Nandakumar, Namita. 2020. “R + Tidyverse in Sports.” RStudio Conference 2020. 2020. https://resources.rstudio.com/rstudio-conf-2020/r-tidyverse-in-sports-namita-nandakumar.

Ng, Andrew. 2008. “监督学习应用梯度下降.” 2008. https://open.163.com/movie/2008/1/B/O/M6SGF6VB4_M6SGHJ9BO.html.

Rizwan, Muhammad. 2018. “How to Select Activation Function for Deep Neural Network.” 2018. https://engmrk.com/activation-function-for-dnn/.