Power Law Distributions in Deep Learning In a previous post, we saw that the Fully Connected (FC) layers of the most common pre-trained Deep Learning
Why Does Deep Learning Work? If we could get a better handle on this, we could solve some very hard problems in Deep Learning. A
Why Deep Learning Works: Self-Regularization in DNNs An early talk describing details in this paper: Implicit Self-Regularization in Deep Neural Networks: Evidence from Random
I just got back from ICLR 2019, where I presented 2 posters (and Michael gave a great talk!) at the Theoretical Physics Workshop on AI. To
Please enjoy my video presentation on Geoff Hinton’s Capsule Networks: what they are, why they are important, and how they are implemented (in Keras) …
My Labor Day Holiday Blog: for those on email, I will add updates, answer questions, and make corrections over the next couple of weeks. Thanks for