It works BEAUTIFULLY

HOLY **** this 500k-param stack of linear layers hit 79% validation accuracy. I mean, it's just a linear head, but it works really well, and with only a fraction of the neurons active.

I posted an updated training feed because I never expected it to work so well.

Prelim

This is a prototype linear stack meant to benchmark the newest CantorLinear layer implementation.

Preliminary MNIST runs show that CantorLinear's embedded Cantor-fingerprinted direct neuron learning mask with alpha weights:

  • shifts accuracy by up to ±4%, commonly +2% over standard linear layers, and reduces train time by about 4% (worst case, roughly 4% slower). A hedged sketch of what such a layer could look like follows below.
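
Since the actual implementation lives in the repo linked below, the following is only a minimal sketch of how I read "Cantor-fingerprinted neuron mask with alpha weights": a fixed binary mask built by removing middle thirds (the Cantor construction) over the output neurons, blended with the dense weights by a learnable per-neuron alpha. The names `cantor_mask`, `CantorLinear`, `depth`, and `alpha` here are my assumptions, not the repo's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cantor_mask(n: int, depth: int = 3) -> torch.Tensor:
    """Binary 'Cantor fingerprint' over n positions: a position survives if
    none of its first `depth` base-3 digits equals 1 (the removed middle
    third at each scale). Roughly (2/3)**depth of positions stay active."""
    x = (torch.arange(n, dtype=torch.float64) + 0.5) / n  # coords in (0, 1)
    keep = torch.ones(n, dtype=torch.bool)
    for _ in range(depth):
        x = x * 3.0
        digit = x.floor().clamp(0, 2)
        keep &= digit != 1        # drop the middle third at this scale
        x = x - digit             # recurse into the surviving interval
    return keep.float()

class CantorLinear(nn.Module):
    """Assumed design: a linear layer whose weight rows are gated by a fixed
    Cantor mask, blended with the dense weights by a learnable alpha."""
    def __init__(self, in_features: int, out_features: int, depth: int = 3):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.register_buffer("mask", cantor_mask(out_features, depth).unsqueeze(1))
        self.alpha = nn.Parameter(torch.zeros(out_features, 1))  # per-neuron blend

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.alpha)  # blend factor in (0, 1)
        # Fingerprinted rows keep full weight; masked-out rows are damped by (1 - a).
        w = self.linear.weight * (a * self.mask + (1.0 - a))
        return F.linear(x, w, self.linear.bias)
```

At depth 3 only about (2/3)^3 ≈ 30% of neurons sit on the fingerprint, which would line up with the "fraction of the neurons active" observation above, if my reading is right.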

Preliminary MNIST runs show that CantorConv with a Cantor-fingerprinted learning mask:

  • matches traditional conv accuracy almost exactly, within ±2% either way depending on noise. A conv-flavored sketch under the same assumptions follows below.
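
Again purely as a hedged guess at the design (reusing the assumed `cantor_mask` from the sketch above, not the repo's actual code), the conv version might gate output channels the same way:

```python
class CantorConv2d(nn.Module):
    """Assumed design: Conv2d whose output channels are gated by a fixed
    Cantor fingerprint, blended in by a learnable per-channel alpha."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int,
                 depth: int = 3, **kw):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)
        self.register_buffer("mask", cantor_mask(out_ch, depth).view(1, out_ch, 1, 1))
        self.alpha = nn.Parameter(torch.zeros(1, out_ch, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.alpha)
        # Channel-wise gate on the conv activations: fingerprinted channels
        # pass through at full strength, the rest are damped by (1 - a).
        return self.conv(x) * (a * self.mask + (1.0 - a))
```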

I will be testing Cantor ResNet by feeding it the ordered ImageNet features from my repos to see how it fares.

https://github.com/AbstractEyes/lattice_vocabulary/blob/master/src/geovocab2/train/model/layers/linear.py
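
For scale, a stack in the rough 500k-parameter range using the assumed `CantorLinear` above (hypothetical layer sizes, not necessarily the repo's configuration) could look like:

```python
# ~535k weights total (784*512 + 512*256 + 256*10), in the ballpark quoted above.
model = nn.Sequential(
    nn.Flatten(),
    CantorLinear(784, 512), nn.ReLU(),
    CantorLinear(512, 256), nn.ReLU(),
    CantorLinear(256, 10),
)
```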

While testing this stack, I also have a prototype for a second CantorLinear layer that I'll be testing next.
