AbstractPhil
PRO · Open to Collab
83 followers · 109 following
https://civitai.com/user/AbstractPhila
AbstractEyes
AI & ML interests
datasets, research papers, experimentation, vision, classification, text encoders, tokenization, llms, diffusion, distillation, and more.
Recent Activity
- Updated a model 25 minutes ago: AbstractPhil/geolip-SVAE
- Replied to their post 1 day ago:
By trying to disprove the Omega H2 battery I have discovered:
- Each topology formed by the H2 battery is deviant; none share a uniform substrate of behavior. Each is uniquely independent per training set, all with perfect reconstruction.
- Image reconstruction can be tracked and mapped, yielding a consistently mapped response with a 16.77M vocabulary potential. Current spectrum testing covers around 5 million unicode bytes.
- Model scale shows that patch size relates to how much data you want the model to represent within itself, and no capacity ceiling has been observed to date. The MSE reconstructs and yields, and the more data fed, the more it yields.
- The scaling principle shows the model scales upward indefinitely, and each level of the model can be iteratively captured upward to form deviant yet uniformly consistent, repeatable pathways of implicit codewise response, not just arbitrary bitwise recall: meaningful, implicitly learned utility.
- Image reconstruction patch size should match the slice of image you want to represent, as the model applies per-patch smoothing internally from identity.
- Byte trigrams are channel-agnostic: they require no channel count, just a formula for recall, achieving 99.6% n-gram recall for byte-by-byte representations. With those comes an adjacently capable codebook.
- Preliminary SentencePiece tests show validity and reconstruction just like the byte trigrams; using the new byte trigram, it would be convenient to reconstruct a codebook for the structure.
- Binary trees learn a uniformly potent gating mechanism that requires further exploration; each produces directly responsive, independent capacity, and the responses are controllable.
- Ternary experiments show the models respond directly to -1, 0, +1 behavior, so quantization is a valid possibility.
- Preliminary tests with the H2O1 series of batteries show the models responding similarly to natural elements in the universe itself.
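As a side note, the 16.77M vocabulary figure above matches the count of all possible 3-byte sequences (256³ = 16,777,216). A minimal sketch of channel-agnostic byte-trigram extraction with a trivial index formula, using hypothetical helper names; this is an illustration of the arithmetic, not the author's implementation:

```python
def byte_trigrams(text: str) -> list[bytes]:
    """Extract overlapping 3-byte windows from the UTF-8 encoding (hypothetical helper)."""
    b = text.encode("utf-8")
    return [b[i:i + 3] for i in range(len(b) - 2)]

def trigram_id(tri: bytes) -> int:
    """Map a trigram to a codebook index in [0, 256**3) via base-256 positional encoding."""
    return (tri[0] << 16) | (tri[1] << 8) | tri[2]

# The full trigram codebook has 256**3 = 16,777,216 entries,
# matching the "16.77M vocabulary potential" mentioned above.
ids = [trigram_id(t) for t in byte_trigrams("Omega")]
```

Because the index is a pure formula over raw bytes, no learned channel dimension is needed to enumerate the codebook, which is consistent with the channel-agnostic claim.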
AbstractPhil's models (177)
Sort: Recently updated
- AbstractPhil/geolip-SVAE · Updated 3 minutes ago · 2
- AbstractPhil/geolip-hypersphere-experiments · Updated 3 days ago
- AbstractPhil/geolip-svae-implicit-solver-experiments · Updated 8 days ago
- AbstractPhil/geolip-svae-h2-64 · Updated 8 days ago · 217
- AbstractPhil/geolip-svae-ablations · Updated 8 days ago
- AbstractPhil/geolip-svae-batteries · Other · Updated 12 days ago
- AbstractPhil/geolip-cvae-proto · Updated 13 days ago
- AbstractPhil/geolip-svd-encoder-sweeps · Updated 15 days ago
- AbstractPhil/geolip-spectral-cell · Updated 17 days ago
- AbstractPhil/geolip-spectral-vit · Updated 17 days ago
- AbstractPhil/geolip-conduit-experiments · Updated 22 days ago
- AbstractPhil/geolip-svd-reconstitution · Updated 22 days ago
- AbstractPhil/svae-freckles-4096 · Updated 23 days ago
- AbstractPhil/svae-freckles-256 · Updated 23 days ago
- AbstractPhil/geolip-omega-diffusion-128 · Updated 24 days ago
- AbstractPhil/svae-fresnel-128 · Image-to-Image · Updated 25 days ago · 115
- AbstractPhil/svae-johanna-128 · Updated 25 days ago
- AbstractPhil/geolip-deep-embedding-analysis · Updated 27 days ago
- AbstractPhil/geolip-transformer-v8 · Updated 29 days ago
- AbstractPhil/geolip-flow-predictions · Updated about 1 month ago
- AbstractPhil/eigh-triton · Updated Apr 1 · 1
- AbstractPhil/eig-triton · Updated Apr 1
- AbstractPhil/geometric-transformer-v3 · Updated Apr 1
- AbstractPhil/geolip-geometric-transformer-v1 · Updated Mar 28
- AbstractPhil/geolip-esm2_t33_650M_UR50D · Updated Mar 28 · 137
- AbstractPhil/svd-triton · Updated Mar 25
- AbstractPhil/geolip-core · Image Classification · Updated Mar 24
- AbstractPhil/geolip-constellation-core · Updated Mar 21
- AbstractPhil/geolip-cv-experiments · Updated Mar 21
- AbstractPhil/geolip-constellation-activations · Updated Mar 18