Fantastic Measures of Generalization — That Actually Work
In the next few posts, I am going to discuss how to use the generalization metrics included in the open-source weightwatcher tool. The goal is to develop a general-purpose tool that you can use,...
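For a quick taste of what the tool looks like in practice, here is a minimal sketch of the basic weightwatcher workflow, assuming you already have a trained PyTorch or Keras model stored in a variable called model:

    # a minimal sketch, assuming `model` is an existing trained PyTorch or Keras model
    import weightwatcher as ww

    watcher = ww.WeightWatcher(model=model)
    details = watcher.analyze()              # per-layer metrics as a pandas DataFrame
    summary = watcher.get_summary(details)   # aggregate metrics, e.g. the average power-law exponent alpha
    print(summary)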
Is your layer over-fit? (part 2)
Say you are training a Deep Neural Network (DNN), and you see your model is over-trained, or just not performing well. Is there a way to detect which layer is actually over-trained? (or over-fit, as...
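To make the per-layer idea concrete, here is a minimal sketch using weightwatcher's per-layer alpha metric; the alpha-below-2 threshold is an illustrative rule of thumb from the weightwatcher documentation rather than a hard rule, and model is again assumed to be your trained network:

    # a minimal sketch: flag layers whose power-law exponent alpha is unusually small,
    # which the weightwatcher heuristics suggest may indicate an over-trained layer
    import weightwatcher as ww

    watcher = ww.WeightWatcher(model=model)   # model: your trained PyTorch or Keras network
    details = watcher.analyze()               # one row of metrics per layer
    suspect = details[details.alpha < 2.0]    # threshold is illustrative, not definitive
    print(suspect)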
Better than BERT: Pick your best model
Have you ever had to sort through HuggingFace to find your best model? There are over 54,000 models on HuggingFace! So it’s not an easy task. Most people just choose the most popular model–and this...
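As a rough sketch of how such a comparison could work with weightwatcher, the snippet below loads two candidate HuggingFace checkpoints and compares their average alpha; the model names are illustrative placeholders, and the smaller-alpha-is-better reading is the weightwatcher heuristic, not a guarantee:

    # a rough sketch: compare candidate HuggingFace checkpoints by average alpha
    # (the model names below are illustrative placeholders)
    import weightwatcher as ww
    from transformers import AutoModel

    for name in ["bert-base-uncased", "roberta-base"]:
        model = AutoModel.from_pretrained(name)
        watcher = ww.WeightWatcher(model=model)
        summary = watcher.get_summary(watcher.analyze())
        # by the weightwatcher heuristic, a smaller average alpha (roughly within 2-6)
        # tends to indicate a better-trained model
        print(name, summary.get("alpha"))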
Deep Learning and Effective Correlation Spaces
AI has taken the world by storm. With recent advances like AlphaFold, Stable Diffusion, and ChatGPT, Deep Neural Networks (DNNs) have had their Sputnik moment. And yet, we really don’t understand why...
WeightWatcher 0.7: March 2023
First, let me say thanks to all the users in our great community — we have reached over 93K downloads as of March 2023! The latest release of the open-source weightwatcher tool includes several...
WeightWatcher new feature: fix_fingers='clip_xmax'
WeightWatcher 0.7 has just been released, and it includes a new and improved advanced feature for analyzing Deep Neural Networks (DNNs) called fix_fingers. To activate it, simply use: details =...
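The excerpt above is cut off mid-snippet; a minimal sketch of the call, assuming a trained model in a variable named model, looks like this:

    # a minimal sketch of the fix_fingers option, assuming `model` is your trained network
    import weightwatcher as ww

    watcher = ww.WeightWatcher(model=model)
    # fix_fingers='clip_xmax' clips the largest eigenvalues ("fingers") before
    # fitting the power-law tail of each layer's eigenvalue spectrum
    details = watcher.analyze(fix_fingers='clip_xmax')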
Evaluating Fine-Tuned LLMs with WeightWatcher
If you are fine-tuning your own LLMs, you need a way to evaluate them. And while there are over a dozen popular methods to choose from, each of them is biased toward a specific, narrowly scoped...
Evaluating Fine-Tuned LLMs with WeightWatcher Part II: PEFT / LoRA Models
Evaluating LLMs is hard, especially when you don’t have a lot of test data. In the last post, we saw how to evaluate fine-tuned LLMs using the open-source weightwatcher tool. Specifically, we looked at...
Evaluating LLMs with WeightWatcher Part III: The Magic of Mistral, a Story of...
Recently, the Mistral models have taken the LLM world by storm. The Mistral Mixture of Experts (MoE) 8x7B model outperforms other models in its weight class, such as Llama 2 70B and GPT-3.5. Here’s a...
SVDSmoothing LLM Layers with WeightWatcher
Microsoft Research recently published the LASER method, "Layer-Selective Rank Reduction," in the very popular paper The Truth is in There: Improving Reasoning in Language Models with...
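For orientation, here is a hedged sketch of what calling SVDSmoothing from weightwatcher can look like; the call with only the model argument follows the weightwatcher README, and anything beyond that (extra keyword options, exactly which layers get smoothed) should be checked against the current docs:

    # a hedged sketch of SVDSmoothing, assuming `model` is a trained (e.g. HuggingFace) network;
    # consult the weightwatcher docs for the full set of options
    import weightwatcher as ww

    watcher = ww.WeightWatcher(model=model)
    # replaces layer weight matrices with low-rank (SVD-truncated) approximations
    smoothed_model = watcher.SVDSmoothing(model=model)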