AbstractPhil (AbstractPhila) PRO

Open to Collab

AI & ML interests

datasets, research papers, experimentation, vision, classification, text encoders, tokenization, LLMs, diffusion, distillation, and more.

Recent Activity

replied to their post about 3 hours ago
geolip-bertenstein-v1: 5 experts chosen. A collective of shared, transformer-aligned experts rather than a mixture of experts. It is similar to an MoE, but not quite. This first prototype won't have the full mailing projection relay system afforded by the geofractal router, but it will definitely be a solid prototype. It is not production ready yet; a few upstream and downstream tools still need to be built to consume and process the outputs into useful representations.

This model will be able to respond to text, use Whisper, see with dinolip, code with CodeBERT, and process proteins using esm2_t33_650m_ur50. Our experts for the prototype are:

- google-bert/bert-large-uncased
- facebook/dinov2-large
- microsoft/codebert-base
- openai/whisper-large-v3
- facebook/esm2_t33_650M_UR50

Not the smartest text model, but more than enough for this preliminary use-case test setup. Text is predominantly meant to align and orient downstream function; the entire machine is meant to be operated as a unified collective, or independently through individual pair requests via special-token access.

This model will be capable of substantial power and feats as a prototype. It will be able to see and process differential equations using dinov2 and esm2 data simultaneously, which can be used for downstream analysis, and I WILL use that data to create a more powerful connection between dinov2 tokens, protein tokens, video tokens, code tokens, and audio tokens. This is the FIRST prototype of its kind, and I will introduce video, genetics, shape analysis, pattern-recognition processing, and a much more powerful and reusable text model.

The tests show the models can communicate differentially through the geolip transformers after pairwise Procrustes analysis and pentachoron CV protective measures. Whitened Procrustes for precalculation and center-aligning allows faster convergence, so that should help too.
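The whitened-Procrustes precalculation described above can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the geolip implementation: the function names are assumptions, the toy random matrices stand in for paired embeddings from two experts, and it assumes both embedding spaces share one dimension (as bert-large-uncased and dinov2-large happen to, both 1024-d).

```python
import numpy as np

def whiten(X, eps=1e-6):
    """Center X and ZCA-whiten it so its features have identity covariance."""
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W

def procrustes_align(A, B):
    """Orthogonal R minimizing ||A @ R - B||_F (classic Procrustes solution)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Toy paired embeddings: B is a rotated, slightly noisy copy of A.
rng = np.random.default_rng(0)
d, n = 64, 256
A = rng.normal(size=(n, d))
R_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
B = A @ R_true + 0.01 * rng.normal(size=(n, d))

# Whiten and center both spaces first, then solve for the orthogonal map.
Aw, Bw = whiten(A), whiten(B)
R = procrustes_align(Aw, Bw)
rel_err = np.linalg.norm(Aw @ R - Bw) / np.linalg.norm(Bw)
```

Whitening both spaces before solving makes the residual depend only on the genuinely unshared structure, not on per-expert feature scaling, which is consistent with the faster-convergence claim.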
published a dataset about 12 hours ago
AbstractPhil/bertenstein-v1

Organizations

DeepGHS, Blog-explorers, BangumiBase, Abstract Powered Research