Welcome to my personal notes!

turns out i've been shuffling the wrong dimension of my data (along the model dim instead of the batch dim)
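for my own future reference, the difference in a toy numpy example (shapes and names here are just illustrative, not my actual pipeline):

```python
import numpy as np

# toy activation buffer: rows are examples (batch), columns are the model dim
acts = np.arange(12).reshape(4, 3)
rng = np.random.default_rng(0)

# correct: permute the batch dimension, so each example stays intact
perm = rng.permutation(acts.shape[0])
shuffled = acts[perm]

# the bug: permuting the model dimension scrambles features inside every example
bad_perm = rng.permutation(acts.shape[1])
scrambled = acts[:, bad_perm]

# a batch shuffle preserves the set of examples; the model-dim shuffle does not
assert sorted(map(tuple, shuffled)) == sorted(map(tuple, acts))
```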


i think i've implemented the auxk loss and topk activations correctly, but for auxk it is hard to know since neurons generally don't die till later in training
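for reference, my understanding of the topk activation (just the topk half — auxk needs a full training loop to demo; numpy stand-in, not the openai code):

```python
import numpy as np

def topk_activation(pre_acts, k):
    # keep only each example's k largest pre-activations, zero everything else;
    # sparsity is then exactly k per example, so no L1 penalty is needed
    out = np.zeros_like(pre_acts)
    idx = np.argpartition(pre_acts, -k, axis=-1)[..., -k:]
    np.put_along_axis(out, idx, np.take_along_axis(pre_acts, idx, axis=-1), axis=-1)
    return out

x = np.array([[0.1, 0.9, -0.3, 0.5]])
sparse = topk_activation(x, 2)  # only the 0.9 and 0.5 entries survive
```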

July 9, 2024

https://youtube.com/playlist?list=PLJ66BAXN6D8H_gRQJGjmbnS5qCWoxJNfe&si=XqBK6P6VRr9iJgFN

today's paper:

https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf

83% of my neurons are dead😔

i guess the new loss function was not enough

https://arxiv.org/abs/2406.04093v1

wish i had seen this paper 2 days ago


openai uses the same loss function as the original anthropic paper (towards monosemanticity), but the new anthropic paper uses a new one (which i implemented, and it resulted in hella dead neurons)

there must be something i am missing re: the new anthropic method, since oai adds extra stuff (topk activations only, plus an auxiliary loss)
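my reading of the two sparsity penalties, sketched in numpy (conventions and names are mine — rows of W_dec are latent directions; this is not either paper's code):

```python
import numpy as np

def sae_losses(x, x_hat, f, W_dec, lam=1e-3):
    # 'old' = reconstruction + plain L1 on the latents f
    # 'new' = reconstruction + L1 weighted by each latent's decoder norm,
    #         which removes the shrink-the-decoder loophole of plain L1
    recon = ((x - x_hat) ** 2).sum()
    old = recon + lam * np.abs(f).sum()
    new = recon + lam * (np.abs(f) * np.linalg.norm(W_dec, axis=1)).sum()
    return old, new

old, new = sae_losses(np.array([1.0, 2.0]), np.array([1.0, 1.0]),
                      f=np.array([0.0, 2.0, 0.0]),
                      W_dec=np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]]))
```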

July 8, 2024

i think i found the memory problem: the optimizer was about 8gb on the gpu
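that size is consistent with adam keeping two fp32 moment buffers per parameter — back-of-envelope below (the 1B param count is just an example to make the arithmetic land on 8gb, not my model's size):

```python
def adam_state_gb(n_params, bytes_per_el=4, moments=2):
    # adam stores first- and second-moment buffers (m and v) for every
    # parameter, so optimizer state alone is ~2x the fp32 model size
    return n_params * bytes_per_el * moments / 1e9

gb = adam_state_gb(1_000_000_000)  # → 8.0 GB of optimizer state
```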


new personal site is up


next project after interpretability stuff will either be agents in video games or some kind of really quick diffusion model that is interactive


i need a better way to organize papers i want to read, maybe a page on my site would work

July 7, 2024

dataloader is super convoluted, but seems to be working so far

something is wrong though, my loss curve looks like a cosine function

model will probably have to train for a couple days... hopefully i did everything correctly


i forgot that deleting files just puts them in the trash instead of actually deleting them

i have 1.3TB of deleted model activations in my trash

July 6, 2024

model is done, now working on efficient dataloader, which is much more of a challenge than i wouldve thought

July 3, 2024

the smallest SAE anthropic trained for golden gate claude had an internal dim of >1M

that is 256x the activation dim (for my model); the toy sae i trained was only 32x larger

may have to bring out the big guns later (cloud gpu)

hooray! they said no resampling was needed when they used the new sparsity penalty!

July 2, 2024

re: scaling up interp

i can now get the activations of layer N of mistral 7b on some tokens; now i just need a smart way of doing this efficiently while training the SAE
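the standard pytorch way to grab layer activations is a forward hook — toy stand-in below (a stack of linears instead of mistral, since the real model won't fit in a snippet):

```python
import torch
import torch.nn as nn

# toy "transformer": 4 layers, model dim 8 — just to demo the hook mechanics
model = nn.Sequential(*[nn.Linear(8, 8) for _ in range(4)])

captured = {}
def make_hook(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()  # stash activations for the SAE buffer
    return hook

layer_n = 2
handle = model[layer_n].register_forward_hook(make_hook("layer2"))

with torch.no_grad():
    model(torch.randn(3, 8))  # activations land in `captured` as a side effect

handle.remove()  # don't leave hooks attached once the buffer is filled
```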


https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html

will definitely have to be more disciplined re: training of SAE to make sure i get rid of dead neurons

internal dim of mistral7b is 4096, which is still not super big, so THEORETICALLY model should not take too long to train

long term goal for this project is to train model for each layer (32 in total) and release some kind of interactive site where you can play with activating different features

goal for this week is just to get a single layer trained

good name for this project is "Golden Gate 7b"

July 1, 2024

taking a break from arc-agi today, gonna get mistral-7b + training data set up to scale up sparse autoencoder

am having a hard time finding a pure pytorch implementation of mistral-7b (need to be have fine control over individual layers so i can access activations)

implementing it myself might be the move

June 30, 2024

finished basic data augmentation + tokenizer, will try some experiments to see if these improve performance


blog post is done, some time this week i'll ship new site and start on scaling interpretability stuff to bigger open source models

June 27, 2024

not getting anywhere with mcts, predicting whether a solution is right in a single step is just as hard as the base problem, and determining whether one solution is a bit better than another is also hard

maybe will return to it at some point

i definitely still like the idea of training on the specific examples at inference time though


ok with new strategy, am getting 60% of pixels right (for the first task, will move to others when i start seeing better results)

this is pretty terrible considering that random guessing would do only slightly worse

gives me a baseline though


i think something that will probably have an outsized impact is how im doing tokenization/preparing inputs

June 26, 2024

ok website is pretty close to being done, as is the blog post

time to work on arc


current method not really working

will continue new strategy tomorrow

June 25, 2024

working on ARC

my model is buggin fr

loss is going to the moon 😭

architecture is way too complicated

maybe some kind of siamese network that i partially train at inference (one side is input, other is output)

once trained on examples, then search for output that makes test input work?


model can easily distinguish between random noise and actual answers

while training, need more sophisticated way to generate incorrect answers (start with correct answer and apply random stuff)
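rough sketch of the generator i'm describing (hypothetical names; `corrupt` just reassigns a few random cells of the correct grid to get "near-miss" negatives):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(grid, n_cells=3, n_colors=10):
    # start from the correct answer and randomly reassign a few cells,
    # giving near-miss negatives instead of pure-noise ones
    bad = grid.copy()
    idx = rng.choice(bad.size, size=n_cells, replace=False)
    bad.reshape(-1)[idx] = rng.integers(0, n_colors, size=n_cells)
    return bad

answer = np.zeros((3, 3), dtype=int)
negative = corrupt(answer)  # same shape, differs in at most n_cells places
```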

June 24, 2024

re: arc

i'd like to use this as an excuse to try out combining mcts with normal deep learning stuff, so first step is probably just pure mcts

also starting out with the smaller puzzles (3x3) might help

mcts won't work alone though, because there is no way to tell if the current leaf is the final solution, so you need some kind of model that determines if a solution is correct (which might be just as hard as the normal problem)

you need a model whose weights update with each example, and that can then be given the test state along with a proposed solution, resulting in a probability that it is correct

is this what a "liquid" neural net is?

i suppose that for each task you could just optimize (normal gradient descent) over your examples, but there is no way it wouldn't overfit with only ~3 examples

might work if you use a tiny model, but that wouldn't have sufficient complexity for harder tasks

i think liquid neural nets could be the move

the paper is pretty dense tho

https://arxiv.org/pdf/2006.04439

June 23, 2024

gonna work on arc challenge before i try scaling up SAE to actual open source models (likely on 7b param models, though we'll see if i have the necessary compute)


new site is probably about 75% done, but i'd like to finish the blog post before i ship

June 22, 2024

i need to learn einsum
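notes to self — the whole trick is labeling axes, and repeated labels get summed over (numpy here; the same subscript strings work in torch):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# "ij,jk->ik": j appears in both inputs but not the output, so it is summed
# over — which is exactly matrix multiplication
C = np.einsum("ij,jk->ik", A, B)
assert np.array_equal(C, A @ B)

# "ij->ji" transposes, "ij->i" sums each row, "bij,bjk->bik" is batched matmul
row_sums = np.einsum("ij->i", A)
assert np.array_equal(row_sums, A.sum(axis=1))
```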

June 21, 2024

letting the model train way after the loss stopped improving may have worked, distribution seems to look better


found interpretable features!!!!

about 1/3 of them are totally dead, but the first one i looked at seems to be the end of a sentence followed by a new sentence that begins with "The"

the way i am looking at them is still super crude, but this is really promising

pretty much all of the features i have looked at so far correspond to single common words like "during", "of", "to"

nevermind, just found one that seems to be about passing rules:

> the US and Europe,__ signing__ a deal with Pharmaceutical

> the government__ signed__ a peace agreement with

> this month, the Senate__ launched__ its best-known

> Many women were reluctant to__ file__ complaints against their

the token with the underscores around it is the token the feature fired on most


a reasonable summary would be that most features correspond to specific words, though some are more general and will fire for any synonym, which implies generalization!

i wouldn't expect to see many features for relationships more complex than single words, since the output of the actual model is not super coherent


based on some rough estimations, it seems like about 1/3 of the features are "interpretable", 1/3 are dead, and the rest are still kinda in superposition (they activate really often and on a bunch of seemingly unrelated tokens)

June 20, 2024

need to take a break from interp model (still getting weird artifacts in feature distributions), will work on website redesign

small chance that the autoencoder isn't working bc it hasn't seen enough tokens, which is scary because if that's not the cause it will mean i have wasted like an entire day waiting for it to train


hilbert curve to make arc agi 1d so you can put it in temporal format

i didn't think of that myself, it's just a really cool idea
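the classic iterative index→(x, y) mapping, in case i ever try it (adapted from the standard d2xy algorithm; n must be a power of 2):

```python
def d2xy(n, d):
    # walk the bits of d two at a time, rotating/reflecting each quadrant
    # so that consecutive indices always land on neighboring cells
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x  # rotate the quadrant
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# linearizing a 4x4 grid: every cell visited once, neighbors stay neighbors,
# unlike row-major flattening which teleports at the end of each row
order = [d2xy(4, d) for d in range(16)]
```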

June 19, 2024

idk man, the distribution of activations is all goofy

this autoencoder is way too sparse

holup i might be goated

June 18, 2024

is there anything better than waking up to a beautiful loss curve from a model that has been training overnight

loss is still higher than i expected, though it makes sense since it is a single, pretty small layer

i am now wondering if my dataset is too uniform (the paper found features for other languages and base64, but i think my dataset is basically wikipedia-type tokens)

guess we'll see


some example output:

> It is only recently that he was compelled to return to Australia to prosper from self-government to wholesome and to cultures of central Australia.

> In Fremont County is a lush green town named according to an article published by Smithsonian magazine.

obviously doesn't make sense but there are still connections being made (*articles* are published by *magazines*)

also, there are sometimes other languages in the output, so those features will actually be there

time to start on the autoencoder!


autoencoder is being difficult, like 80% of the neurons are dead :(

trying to just reinitialize the weights for those every so often, but its lowkey buggin
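what i mean by reinitializing, roughly (numpy sketch, my own simplification — not anthropic's full resampling procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_dead(W_enc, fire_counts, scale=0.01):
    # a neuron that never fired over the measurement window is "dead";
    # give its encoder row fresh small random weights so it can recover
    dead = fire_counts == 0
    W_enc = W_enc.copy()
    W_enc[dead] = rng.normal(0.0, scale, size=(int(dead.sum()), W_enc.shape[1]))
    return W_enc, dead

W = np.ones((6, 4))                     # 6 hidden neurons, input dim 4
counts = np.array([5, 0, 3, 0, 0, 9])   # activations per neuron this window
W_new, dead = resample_dead(W, counts)  # 3 rows get reinitialized
```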

June 17, 2024

re: training the single layer transformer, i could just use a pretrained one (like what the open source replication did), but i waited for like 5 hours yesterday to download a huge dataset, so i'd like to do it myself


ok should have fully trained model by tomorrow


https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt

ok nevermind this isn’t actually doing reasoning, just trying a bunch of solutions to see if one works


have basic training loop working, for model of this size i should probably add some more sophisticated stuff though (learning rate schedule, proper logging/val testing, early stopping)

i think this might be the first time training a model has worked first try though

June 16, 2024

https://transformer-circuits.pub/2023/monosemantic-features#phenomenology-fsa

the html open/close tag circuit is so cool, i have always wondered how models keep track of syntax stuff like this when writing code


ok first step of replication is just training single layer transformer

definitely will be smaller than what was used in the paper, but i should hopefully still get some cool results


https://arxiv.org/pdf/2406.07394

need to be reading more MCTS stuff, my knowledge pretty much ends at what alphago used

June 15, 2024

sparse autoencoders could be the move


ok new project is recreating Towards Monosemanticity results, then eventually try to do the same for larger open source models (larger meaning ~7b params, though we'll see if i have enough compute even for that)

https://gwern.net/forking-path

June 14, 2024

ok remade the first experiment, definitely helped make everything more concrete


on a tiny model (single layer autoencoder), you can see that as sparsity increases, more features can be represented

more sparsity = more likely to only see a single feature per example

this is because models use polysemanticity and superposition (when a neuron encodes more than a single feature)

with a lot of sparsity, each feature is less and less orthogonal to others, hence what looks like noise outside of the diagonal
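the off-diagonal noise in one tiny numpy picture (8 unit feature directions crammed into 4 dims — numbers are arbitrary, just to show the effect):

```python
import numpy as np

rng = np.random.default_rng(0)

# more features than dimensions: they cannot all be orthogonal
n_features, d = 8, 4
W = rng.normal(size=(n_features, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # unit-norm feature directions

# gram matrix: 1s on the diagonal, and the off-diagonal entries are the
# interference between features — the "noise" superposition has to tolerate
G = W @ W.T
off_diag = np.abs(G[~np.eye(n_features, dtype=bool)])
```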


not sure if i will reimplement later parts of the paper, it gets kinda hairy and not super applicable to big models

but the above is pretty cool and shows why interpretability is so hard (lots of sparsity => superposition => messy neurons that encode lots of different things)


for the rest of today i want to finish this paper and then start on the toy monosemanticity one


chollet episode of dwarkesh pod has completely changed my outlook on the future of LLMs

LLMs are just memory, and we do not yet have logical reasoning

the fact that models can’t pass the ARC benchmark is very clear evidence of this, and i had never heard of it

June 13, 2024

papers (especially ones with less math notation) on the kindle is definitely the move

ok gonna try to recreate some of the visualizations from the "toy models of superposition" paper

June 12, 2024

a paper a day


today's paper: Gradient-based learning applied to document recognition (original CNN paper)

figure i should start out with things i am already familiar with to get better at reading papers in general


i am pretty sure this is from @varepsilon's ideas for projects, but a command line tool that gives a public link to local images would be fun to build

would be pretty easy too


https://transformer-circuits.pub/2023/monosemantic-features

mech interp is so cool

https://transformer-circuits.pub/2022/toy_model/index.html

next project will be something to do with interpretability

once i finish reading some papers i will hopefully have a better idea of what it'll be

command line tool was way easier than i thought, literally just an imgur api wrapper

something more robust would be better, but i probably won't even put it on github, let alone put it on a package manager

June 3, 2024

https://rubiks.tylercosgrove.com/

LGTM

runs slow but i am ready to work on something new

i think updating my personal website would be good, i am sick of it

June 1, 2024

checking if a move undoes the previous one (plus some other little checks) reduces total moves checked by more than 10x

full algo is really quick now

maybe in the future i will go back and implement the loop to find more optimal paths, but i would rather have it run really quick than save a couple moves

max # of moves i've seen is 25, but theoretically it could produce a 30 move solve

30 should be the max though
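the kind of check i mean, roughly (my own simplified version of the standard move-pruning rules, not my actual implementation):

```python
# skip moves on the same face as the last move (U then U' collapses), and for
# opposite faces on the same axis, only allow one fixed order (U D but not D U)
FACES = ["U", "D", "L", "R", "F", "B"]
AXIS = {"U": 0, "D": 0, "L": 1, "R": 1, "F": 2, "B": 2}

def allowed(prev_face, face):
    if prev_face is None:
        return True
    if face == prev_face:
        return False  # consecutive turns of one face merge into a single turn
    if AXIS[face] == AXIS[prev_face] and FACES.index(face) < FACES.index(prev_face):
        return False  # same axis: keep only one ordering to kill duplicates
    return True
```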

May 31, 2024

the problem space of phase 2 is way bigger (permutation coordinates run from 0 to ~40k, while the orientation [phase 1] coordinates are just 0 to ~2k)

time to find a solution might even out though, since there are fewer available moves in the phase 2 search

time will tell
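sanity-checking those coordinate ranges (my reading of where the numbers come from):

```python
from math import factorial

# a permutation coordinate over 8 pieces has 8! values ("0 to ~40k"), while
# the phase 1 corner-orientation coordinate has 3^7, since the last corner's
# twist is forced by the others ("0 to ~2k")
perm_states = factorial(8)   # 40320
orient_states = 3 ** 7       # 2187
```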


phase 2 done

phase 2 moves can get pretty long, but i can work on that

algo is basically done!

i just need to go back and forth between phase 1 and 2 to get overall move count lower


not sure if i even need to do that though, move counts are in the low twenties, which is pretty good

going to integrate it into the opencv part now

May 30, 2024

phase 1 done

it is really fast too, i am so hype

it should be easy from here, since all i need to do is add move/prune tables for the rest of the coordinates and write the phase 2 search (which is basically the same thing)

rn i am just using the first solution i find; when phase 2 is done, if solutions are too long, i can go back and find better solutions for the whole thing

but for a scrambled cube i am getting solutions around 7 moves, which is totally fine

May 29, 2024

now i can generate the move tables, so i could theoretically do phase 1

it would be insanely slow though, because the tables don't use the symmetries yet, and i haven't done the pruning tables


ok im gonna ignore symmetry for now and just do pruning on the normal coords

then i should be able to write a version of phase 1, which will tell me if i really need to implement symmetry (if solving phase 1 takes a really long time)


theoretically adding symmetry shouldn't even be all that much faster, it just reduces the table sizes

i think


ok pruning tables are finished

May 28, 2024

the coordinates for the cube got me buggin

am having a hard time wrapping my head around the symmetries

fortunately, seems like once i finish that, i can compute the tables for everything, which is probably most of the way there

May 27, 2024

got the coordinates + moves working (basic cube sim)

now i can begin on the actual search algo (the hard part)


no way this project is going to take me over a month

i need to lock in

May 25, 2024

looking like kociemba algo is the move

https://kociemba.org/twophase.htm

(korf's algo finds the optimal solution; it doesn't find a ~solid solution quickly)


ok im gonna try to implement the alg, will probably end up being more challenging than extracting colors, but will be fun


https://near.blog/where-are-the-builders/

May 23, 2024

ok now i have a simple threejs 3d rendering of the cube so you can verify the scan was correct

thing is you have to scan the face in a certain order(rotate cube right x3, down x1, down twice x1)

if i make a little animation it should be simple enough to use though

ideally you'd be able to show the faces at random, but that would require having to keep track of each piece (have i seen the orange/white edge? if so, then i need to rotate the face)

maybe better left for a future iteration


when it comes to solving, ideally i would not only write the notation of the moves, but actually show it as an animation on the user's cube

but that means i need to have an actually good way of rendering the cube and moves, not just a threejs cube shape with a single texture on each face

before i do that i am just going to implement solving the cube and showing the moves in notation form


interesting that solving cubes in fewest moves is not a fully solved problem

korf's algo seems to be the best, but it is from 97

https://www.cs.princeton.edu/courses/archive/fall06/cos402/papers/korfrubik.pdf

i wonder if deep learning techniques could work

well it is a "solved" problem in that you can always find the optimal solution, it just might take days (even on insane hardware)

May 20, 2024

ok extracting colors is probably good enough

now i need to figure out how im gonna scan in the entire cube, not just a single side