Fragments

Thoughts as they occur to me.

Unnecessary Knowledge

keep it lean

    From Sherlock Holmes:

    "His ignorance was as remarkable as his knowledge. Of contemporary literature, philosophy and politics he appeared to know next to nothing. Upon my quoting Thomas Carlyle, he inquired in the naivest way who he might be and what he had done. My surprise reached a climax, however, when I found incidentally that he was ignorant of the Copernican Theory and of the composition of the Solar System. That any civilized human being in this nineteenth century should not be aware that the earth travelled round the sun appeared to be to me such an extraordinary fact that I could hardly realize it.

    “You appear to be astonished,” he said, smiling at my expression of surprise. “Now that I do know it I shall do my best to forget it.”

    “To forget it!”

    “You see,” he explained, “I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things so that he has a difficulty in laying his hands upon it. Now the skillful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.”

    “But the Solar System!” I protested.

    “What the deuce is it to me?” he interrupted impatiently; “you say that we go round the sun. If we went round the moon it would not make a pennyworth of difference to me or to my work.”

    AI Guardrails, personal information edition

      On the one hand, let's remove some personal information because

      https://news.ycombinator.com/item?id=42292960

      https://justine.lol/history/

      The Focus AI

      I’m back

        I’ve recently started up a company called The Focus AI.

        I’m going to keep writing up things here as I find them, and then cross posting them as it makes sense. Think of what’s happening over there as part of the new mailing list.



        Quality Code Swearing

        more profanity the better

          One of the most fundamental unanswered questions that has been bothering mankind during the Anthropocene is whether the use of swearwords in open source code is positively or negatively correlated with source code quality. To investigate this profound matter we crawled and analysed over 3800 C open source code containing English swearwords and over 7600 C open source code not containing swearwords from GitHub. Subsequently, we quantified the adherence of these two distinct sets of source code to coding standards, which we deploy as a proxy for source code quality via the SoftWipe tool developed in our group. We find that open source code containing swearwords exhibit significantly better code quality than those not containing swearwords under several statistical tests. We hypothesise that the use of swearwords constitutes an indicator of a profound emotional involvement of the programmer with the code and its inherent complexities, thus yielding better code based on a thorough, critical, and dialectic code analysis process.

          Bachelor’s Thesis of Jan Strehmel

          Vibe check

          who needs science

          On the web I started with chatgpt, and it turns out that it makes more sense than claude or gemini.

          Though gemini is way faster.

          In emacs I started with Zephyr, and you know? I think it just does the best with defining words. (Which is what I use the most in emacs)

          With coding, I started with claude in cursor, and man it really kills it. I switched to chatgpt and you know it just wasn't the same.

          So, basically, you just like the thing you used first and there's no objective measure to anything.

          The raven

            i heart ruby

              JavaScript has won, so of course I've been moving more into typescript and javascript. And these last few weeks as I've been going deeper into the world of AI and LLMs I've been dipping into the python ecosystem. And… I'm not convinced.

              Obviously you need to go to where the libraries are but ruby is still the most delightful.

              rust to wasm to javascript

              I was poking around the implementation of the obsidian-extract-url plugin, and it's written in rust but compiled and run as WASM inside of the obsidian plugin environment.

              Novel use case for WASM.

              Coding in one file

              Tailwind and Server Actions

              One file that contains design, layout, code, and remote server code.

              I'm not totally sure how to debug server actions but it's a whole bunch of functionality in one place.

              No shifting between different files, just doing what you set out to do in one context.
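A minimal sketch of the one-file shape, assuming Next.js-style Server Actions. The names (`saveNote`, the form markup) are hypothetical, and a real version would be a .tsx file with a "use server" directive and JSX; plain functions are used here so the single-file idea is visible:

```typescript
// One file: server logic, layout, and Tailwind classes together.

// The "server action" half: runs on the server, receives form data.
// (In Next.js this function body would start with "use server".)
async function saveNote(formData: Map<string, string>): Promise<string> {
  // ...a real action would write to a database here
  return `saved: ${formData.get("text") ?? ""}`;
}

// The "view" half: layout and Tailwind utility classes, same file.
function renderForm(): string {
  return [
    '<form class="flex flex-col gap-2 p-4">',
    '  <textarea name="text" class="border rounded"></textarea>',
    '  <button class="bg-blue-500 text-white rounded px-2">Save</button>',
    "</form>",
  ].join("\n");
}
```

The point isn't the specific markup, it's that the styling, the form, and the thing the form calls all live in one scrollable context.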

              Thoughts on reading the llama 3.1 paper

              I read through the llama 3 paper. Some random thoughts:

              The big model performs more or less as well as the other major models (GPT, Gemini, and Claude) but you can pull it down and fine tune it for your needs. This is a remarkable move, I assume to undermine the competitive advantage of the big AI companies. It means that you don't need $10 billion to enter the AI race in a deep way.

              It took 54 days running on 16,000 H100s. That is a lot of compute.

              During training, tens of thousands of GPUs may increase or decrease power consumption at the same time, for example, due to all GPUs waiting for checkpointing or collective communications to finish, or the startup or shutdown of the entire training job. When this happens, it can result in instant fluctuations of power consumption across the data center on the order of tens of megawatts, stretching the limits of the power grid. This is an ongoing challenge for us as we scale training for future, even larger Llama models.

              Moving data around, both training data and intermediate training checkpoints, required a huge amount of engineering work. The Meta infrastructure – even outside of the compute stuff – was instrumental to this effort.

              One interesting observation is the impact of environmental factors on training performance at scale. For Llama 3 405B, we noted a diurnal 1-2% throughput variation based on time-of-day. This fluctuation is the result of higher mid-day temperatures impacting GPU dynamic voltage and frequency scaling.

              Sourcing quality input data seemed like it was all cobbled together. There was a bunch of work to pull data out of webpages.

              It's mostly trained on English input, and then a much smaller fraction of other languages. I would imagine that quality in English is much higher, and people who use the models in different languages would be at a disadvantage.

              It filtered out stuff I'd expect, like how to make a bomb or create a bioweapon, but I was surprised that it filtered out "sexual content" which it labeled under "adult content". So if sexuality is part of your life, don't expect the models to know anything about it.

              There's the general pre-training model, which was fed a sort of mishmash of data. "Better quality input", whatever that objectively means at this sort of scale.

              Post-training is basically taking a whole bunch of expert human-produced data and making sure that the models answer in that sort of way. So the knowledge and whatever else is embedded is sort of forced into it at that stage.

              Pre-training then is like putting in the full corpus of how language works and the concepts that our languages have embedded. This is interesting in itself because it represents how we model the world in our communication. But though it's fully capable of spitting out coherent bullshit, it doesn't really have any of the "understanding of experts" that would differentiate knowing what you are talking about.

              The post-training is to put in capabilities that are actually useful – both in terms of elevating accepted knowledge, but also other capabilities like tool use. This sort of tuning seems like cheating, or at least a very pragmatic engineering method that "gets the model to produce the types of answers we want".

              The obvious thing is the -instruct variation, which adds things like "system prompt" and "assistant" and "user", so you can layer on the chat interface that everyone knows and loves. But tool use and code generation – it can spit out python code for evaluation when it needs a quantitative answer – are also part of that. I believe that this sort of post-training is of a different sort than the "process all of the words so I understand embedded conceptions in linguistic communication".
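A rough sketch of the chat structure that layer adds. The role names follow the common system/user/assistant convention; the delimiter format below is a simplification I'm assuming for illustration, since the real Llama 3 template wraps each turn in special tokens like `<|start_header_id|>user<|end_header_id|>`:

```typescript
// The chat layer is just structured turns with roles, flattened into
// one prompt string the base model can complete.
type Role = "system" | "user" | "assistant";
interface Message { role: Role; content: string }

// Flatten a chat into a single prompt. Simplified delimiters; the
// actual Llama 3 template uses its own special header tokens.
function toPrompt(messages: Message[]): string {
  return messages.map((m) => `<|${m.role}|>\n${m.content}`).join("\n");
}

const chat: Message[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What is 2 + 2?" },
];
```

Post-training teaches the model to treat text shaped like this as a conversation to continue, rather than just more words to predict.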

              The paper is also a sort of blueprint of what you'd need to do if you wanted to make your own foundation model. They didn't necessarily use the most advanced techniques – preferring to push the envelope on data quality and training time – but the results are working and I suppose in tune with the general "more data, less clever" idea in AI.

              The methodology of training these things is probably well known by the experts out there, but if it was obfuscated knowledge before, it no longer is.