The morning after, in the sense that I left my computer up and running all night, by the way.
The numbers exist. They’re even worse than I expected, and I half-heartedly expected fire. But that’s actually not so bad, because I can start poking at them now.
I’m using somewhere in the vicinity of 12,000 samples, each one 20,000 data points raw. I downsample by factors of 2, 3, or 4 so they don’t throw memory errors, and adjust batch sizes up and down to trade speed for accuracy. I was really surprised that none of it made much of a difference.
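For the record, the downsampling step is nothing fancy. A minimal sketch of what I mean (this is an illustrative Python version, not my actual pipeline; the `downsample` helper and the averaging strategy are my own choices here):

```python
import numpy as np

def downsample(signal, factor):
    # Average consecutive groups of `factor` points (a simple decimation
    # that also smooths, rather than just taking every Nth point).
    n = len(signal) - (len(signal) % factor)  # trim so length divides evenly
    return signal[:n].reshape(-1, factor).mean(axis=1)

raw = np.arange(20000, dtype=float)   # one sample: 20,000 raw data points
print(len(downsample(raw, 4)))        # -> 5000 points after factor-4 downsampling
```

Averaging instead of plain striding keeps a little more of the signal per point, which matters when the whole problem is that the signal is faint to begin with.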
My validation dataset may be the issue, so that’s the next setting to change. After that, I’ve got a dozen ways to manipulate GradientThreshold, batch size, weights, momentum, and so on. I can downsample by factors of 10, 100, or more and throw huge minibatches around. If nothing changes the results, I have bad data: the data doesn’t clearly indicate what I’m trying to measure.
However, I know the signal is in there. That makes this somewhat easier, because I know the problem is solvable. It just takes longer than one might think.