The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

Episode List

LW - Glitch Token Catalog - (Almost) a Full Clear by Lao Mein

Sep 22nd, 2024 8:17 AM

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Glitch Token Catalog - (Almost) a Full Clear, published by Lao Mein on September 22, 2024 on LessWrong.

This is a collection of every unidentified GPT2 glitch token listed in the third glitch token archaeology post. I was able to find the source of every single one, except for "?????-" and "?????-?????-"[1]. Please tell me if I missed one, or you've discovered one and don't understand where it came from. This isn't meant to be a well-written analysis, just a quick repository of my glitch-hunting observations. I plan on writing up and categorizing all of these in greater detail in future posts; the first of which is here.

I used OpenWebText, a recreation of GPT2's training data, for all experiments in this post. I tokenized every .gz file in the archive and made a boolean NumPy array of which tokens were present at least once. This allowed me to quickly identify infrequent tokens in the dataset and pull up the textual context with regular expressions. If there was an issue with overlap, I used a tokenizer-based extraction instead. All data/code available upon request.

The leftmost column is the token id, the middle is the token string, and the right column is the number of files the token was present in (out of 20610). GPT2 has 50256 total tokens.

GPT2 tokens with the lowest frequency in OpenWebText:

30898 'embedreportprint' 0
33434 ' 士' 0
43453 ' SolidGoldMagikarp' 0
1849 '\xa0' 0
47654 ' \xa0\xa0' 0
50009 ' strutConnector' 0
36173 ' RandomRedditor' 0
214 '\x1a' 0
42424 'DragonMagazine' 0
180 ' ' 0
187 ' ' 0
186 ' ' 0
30213 ' externalToEVAOnly' 0
30212 ' externalToEVA' 0
30211 ' guiIcon' 0
185 ' ' 0
30210 ' guiActiveUnfocused' 0
30209 ' unfocusedRange' 0
184 ' ' 0
30202 ' guiName' 0
183 ' ' 0
30905 'rawdownload' 0
39906 'EStream' 0
33454 '龍喚士' 0
42586 ' srfN' 0
25992 ' 裏覚醒' 0
43065 ' srfAttach' 0
11504 ' \xa0 \xa0' 0
39172 '\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0' 0
40240 'oreAndOnline' 0
40241 'InstoreAndOnline' 0
33477 '\xa0\xa0\xa0' 0
36174 ' RandomRedditorWithNo' 0
37574 'StreamerBot' 0
46600 ' Adinida' 0
182 ' ' 0
29372 ' guiActiveUn' 0
43177 'EStreamFrame' 0
22686 ' \xa0 \xa0 \xa0 \xa0' 0
23282 ' davidjl' 0
47571 ' DevOnline' 0
39752 'quickShip' 0
44320 '\n\xa0' 0
8828 '\xa0\xa0\xa0\xa0' 0
39820 '龍 ' 0
39821 '龍契士' 0
28666 'PsyNetMessage' 0
35207 ' attRot' 0
181 ' ' 0
18472 ' guiActive' 0
179 ' ' 0
17811 '\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0' 0
20174 ' 裏 ' 0
212 '\x18' 0
211 '\x17' 0
210 '\x16' 0
209 '\x15' 0
208 '\x14' 0
31666 '?????-?????-' 0
207 '\x13' 0
206 '\x12' 0
213 '\x19' 0
205 '\x11' 0
203 '\x0f' 0
202 '\x0e' 0
31957 'cffffcc' 0
200 '\x0c' 0
199 '\x0b' 0
197 '\t' 0
196 '\x08' 0
195 '\x07' 0
194 '\x06' 0
193 '\x05' 0
204 '\x10' 0
45545 ' サーティワン' 0
201 '\r' 0
216 '\x1c' 0
37842 ' partName' 0
45706 ' \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0' 0
124 ' ' 0
125 ' ' 0
178 ' ' 0
41380 'natureconservancy' 0
41383 'assetsadobe' 0
177 ' ' 0
215 '\x1b' 0
41551 'Downloadha' 0
4603 '\xa0\xa0' 0
42202 'GoldMagikarp' 0
42089 ' TheNitrome' 0
217 '\x1d' 0
218 '\x1e' 0
42090 ' TheNitromeFan' 0
192 '\x04' 0
191 '\x03' 0
219 '\x1f' 0
189 '\x01' 0
45544 ' サーティ' 0
5624 ' \xa0' 0
190 '\x02' 0
40242 'BuyableInstoreAndOnline' 1
36935 ' dstg' 1
36940 ' istg' 1
45003 ' SetTextColor' 1
30897 'reportprint' 1
39757 'channelAvailability' 1
39756 'inventoryQuantity' 1
39755 'isSpecialOrderable' 1
39811 'soDeliveryDate' 1
39753 'quickShipAvailable' 1
39714 'isSpecial' 1
47198 'ItemTracker' 1
17900 ' Dragonbound' 1
45392 'dayName' 1
37579 'TPPStreamerBot' 1
31573 'ActionCode' 2
25193 'NetMessage' 2
39749 'DeliveryDate' 2
30208 ' externalTo' 2
43569 'ÍÍ' 2
34027 ' actionGroup' 2
34504 ' 裏 ' 2
39446 ' SetFontSize' 2
30899 'cloneembedreportprint' 2
32047 ' "$:/' 3
39803 'soType' 3
39177 'ItemThumbnailImage' 3
49781 'EngineDebug' 3
25658 '?????-' 3
33813 '=~=~' 3
48396 'ÛÛ' 3
34206 ...
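The bookkeeping described above is straightforward to reproduce. Below is a minimal sketch, assuming the OpenWebText shards sit as plain-text .gz files under ./openwebtext/ and using the tiktoken GPT-2 encoding as a stand-in for whatever tokenizer the post actually used; it is not the author's code.

```python
# Sketch of the token-frequency bookkeeping described above. Assumptions: the
# OpenWebText shards are plain-text .gz files under ./openwebtext/, and the
# tiktoken GPT-2 encoding stands in for the tokenizer used in the post.
import glob
import gzip

import numpy as np
import tiktoken

enc = tiktoken.get_encoding("gpt2")
paths = sorted(glob.glob("openwebtext/*.gz"))

# file_counts[t] = number of files in which token id t appears at least once
file_counts = np.zeros(enc.n_vocab, dtype=np.int32)

for path in paths:
    with gzip.open(path, "rt", encoding="utf-8", errors="ignore") as f:
        ids = enc.encode(f.read(), disallowed_special=())
    file_counts[np.unique(ids)] += 1

# The lowest-frequency tokens are the glitch-token candidates
for tok_id in np.argsort(file_counts)[:100]:
    print(tok_id, repr(enc.decode([tok_id])), int(file_counts[tok_id]))
```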

LW - Investigating an insurance-for-AI startup by L Rudolf L

Sep 21st, 2024 11:59 PM

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Investigating an insurance-for-AI startup, published by L Rudolf L on September 21, 2024 on LessWrong.

We (Flo & Rudolf) spent a month fleshing out the idea of an insurance-for-AI company. We talked to 15 people in the insurance industry, and did 20 customer interviews. We decided not to continue, but we think it's still a very promising idea and that maybe someone else should do this. This post describes our findings.

The idea

Theory of change

To reduce AI risks, it would be good if we understood risks well, and if some organisation existed that could incentivise the use of safer AI practices. An insurance company that sells insurance policies for AI use cases has a financial incentive to understand concrete AI risks & harms well, because this feeds into its pricing. This company would also be incentivised to encourage companies to adopt safer AI practices, and could incentivise this by offering lower premiums in return. Like many cyber-insurance companies, it could also provide more general advice & consulting on AI-related risk reduction.

Concrete path

TL;DR: Currently, professionals (e.g. lawyers) have professional indemnity (PI) insurance. Right now, most AI tools keep the human in the loop. But eventually, the AI will do the work end-to-end, and then the AI will be the one whose mistakes need to be insured. Currently, this insurance does not exist. We would start with law, but then expand to all other forms of professional indemnity insurance (i.e. insurance against harms caused by a professional's mistakes or malpractice in their work).

Frontier labs are not good customers for insurance, since their size means they mostly do not need external insurance, and they have a big information advantage in understanding the risk. Instead, we would target companies using LLMs (e.g. large companies that use specific potentially-risky AI workflows internally), or companies building LLM products for a specific industry. We focused on the latter, since startups are easier to sell to. Specifically, we wanted a case where:

  • LLMs were being used in a high-stakes industry like medicine or law
  • there were startups building LLM products in this industry
  • there is some reason why the AI might cause legal liability, for example:
    • the LLM tools are sufficiently automating the work that the liability is plausibly on them rather than the humans
    • AI exceptions in existing insurance policies exist (or will soon exist)

The best example we found was legal LLM tools. Law involves important decisions and large amounts of money, and lawyers can be found liable in legal malpractice lawsuits. LLMs are close to being able to do much legal work end-to-end; in particular, if the work is not checked by a human before being shipped, it is uncertain whether existing professional indemnity (PI) insurance applies. People who work in law and law tech are also, naturally, very liability-aware.

Therefore, our plan was:

  • Become a managing general agent (MGA), a type of insurance company that does not pay claims out of its own capital (but instead finds a reinsurer to agree to pay them, and earns a cut of the premiums).
  • Design PI policies for AI legal work, and sell these policies to legal AI startups (to help them sell to their law firm customers), or directly to law firms buying end-to-end legal AI tools.
  • As more and more legal work is done end-to-end by AI, more and more of the legal PI insurance market becomes AI insurance policies.
  • As AI advances and AI insurance issues become relevant in other industries, expand to those industries (e.g. medicine, finance, etc.).
  • Eventually, most of the world's professional indemnity insurance market (on the order of $10B-100B/year) has switched from insuring against human mistakes to insuring against AI mistakes.

Along the way, provide consulting services for countless business...

LW - Applications of Chaos: Saying No (with Hastings Greer) by Elizabeth

Sep 21st, 2024 7:12 PM

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications of Chaos: Saying No (with Hastings Greer), published by Elizabeth on September 21, 2024 on LessWrong.

Previously Alex Altair and I published a post on the applications of chaos theory, which found a few successes but mostly overhyped dead ends. Luckily the comments came through, providing me with an entirely different type of application: knowing you can't, and explaining to your boss that you can't.

Knowing you can't

Calling a system chaotic rules out many solutions and tools, which can save you time and money in dead ends not traveled. I knew this, but also knew that you could never be 100% certain a physical system was chaotic, as opposed to misunderstood. However, you can know the equations behind proposed solutions, and trust that reality is unlikely to be simpler[1] than the idealized math. This means that if the equations necessary for your proposed solution could be used to solve the 3-body problem, you don't have a solution.

[1] I'm hedging a little because sometimes reality's complications make the math harder but the ultimate solution easier. E.g. friction makes movement harder to predict but gives you terminal velocity.

I had a great conversation with trebuchet and math enthusiast Hastings Greer about how this dynamic plays out with trebuchets.

Transcript

Note that this was recorded in Skype with standard headphones, so the recording leaves something to be desired. I think it's worth it for the trebuchet software visuals starting at 07:00.

My favorite parts:

  • If a trebuchet requires you to solve the double pendulum problem (a classic example of a chaotic system; see the sketch after this episode's summary) in order to aim, it is not a competition-winning trebuchet.
  • Trebuchet design was solved 15-20 years ago; it's all implementation details now. This did not require modern levels of tech, just modern nerds with free time.
  • The winning design was used by the Syrians during the Arab Spring, which everyone involved feels ambivalent about.
  • The national pumpkin throwing competition has been snuffed out by insurance issues, but local competitions remain.
  • Learning about trebuchet modeling software.

Explaining you can't

One reason to doubt chaos theory's usefulness is that we don't need fancy theories to tell us something is impossible. Impossibility tends to make itself obvious. But some people refuse to accept an impossibility, and some of those people are managers. Might those people accept "it's impossible because of chaos theory" where they wouldn't accept "it's impossible because look at it"?

As a test of this hypothesis, I made a Twitter poll asking engineers-as-in-builds-things if they had ever used chaos theory to explain a project's impossibility to their boss, and whether it had worked. The final results were:

  • 36 respondents who were engineers of the relevant type. This is probably an overestimate: one respondent replied later that he had selected this option incorrectly, and I suspect that was a common mistake. I haven't attempted to correct for it, as the exact percentage is not a crux for me.
  • 6 engineers who'd used chaos theory to explain to their boss why something was impossible.
  • 5 engineers who'd tried this explanation and succeeded.
  • 1 engineer who tried this explanation and failed.

5/36 is by no means common, but it's not zero either, and it seems like it usually works. My guess is that usage is concentrated in a few subfields, making chaos even more useful than it looks. My sample size isn't high enough to trust the specific percentages, but as an existence proof I'm quite satisfied.

Conclusion

Chaos provides value both by telling certain engineers where not to look for solutions to their problems, and by getting their bosses off their backs about it. That's a significant value add, but short of what I was hoping for when I started looking into Chaos.

Thanks for listening. To help us out with The Nonlinear Library ...
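To make the "knowing you can't" point concrete, here is a minimal numerical sketch (not from the post) of the double pendulum's sensitivity to initial conditions: two runs whose starting angles differ by one part in a billion end up in completely different states within seconds, which is why any aiming scheme that requires solving this system is a dead end. The masses, lengths, initial state, and perturbation size are arbitrary choices for illustration.

```python
# Two double pendulums with nearly identical initial conditions, integrated with
# the standard equations of motion; the tiny initial difference grows until the
# trajectories are unrelated, so long-horizon prediction is hopeless.
import numpy as np
from scipy.integrate import solve_ivp

g, m1, m2, l1, l2 = 9.81, 1.0, 1.0, 1.0, 1.0  # SI units, unit masses and lengths

def deriv(t, y):
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 2 * m1 + m2 - m2 * np.cos(2 * d)
    dw1 = (-g * (2 * m1 + m2) * np.sin(th1)
           - m2 * g * np.sin(th1 - 2 * th2)
           - 2 * np.sin(d) * m2 * (w2**2 * l2 + w1**2 * l1 * np.cos(d))) / (l1 * den)
    dw2 = (2 * np.sin(d) * (w1**2 * l1 * (m1 + m2)
           + g * (m1 + m2) * np.cos(th1)
           + w2**2 * l2 * m2 * np.cos(d))) / (l2 * den)
    return [w1, dw1, w2, dw2]

y0 = np.array([2.0, 0.0, 2.0, 0.0])               # both arms swung well past horizontal
perturbed = y0 + np.array([1e-9, 0.0, 0.0, 0.0])  # one part-in-a-billion difference

t_eval = np.linspace(0, 20, 2001)
a = solve_ivp(deriv, (0, 20), y0, t_eval=t_eval, rtol=1e-10, atol=1e-10)
b = solve_ivp(deriv, (0, 20), perturbed, t_eval=t_eval, rtol=1e-10, atol=1e-10)

gap = np.abs(a.y[0] - b.y[0])                     # difference in the first angle
print(f"angle gap at t=5s:  {gap[500]:.2e} rad")
print(f"angle gap at t=20s: {gap[2000]:.2e} rad") # grows to the same order as the motion itself
```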

LW - Work with me on agent foundations: independent fellowship by Alex Altair

Sep 21st, 2024 4:54 PM

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Work with me on agent foundations: independent fellowship, published by Alex Altair on September 21, 2024 on LessWrong.

Summary: I am an independent researcher in agent foundations, and I've recently received an LTFF grant to fund someone to do research with me. This is a rolling application; I'll close it whenever I'm no longer interested in taking another person. If you're not familiar with agent foundations, you can read about my views in this post.

What the role might be like

This role is extremely flexible. Depending on who you are, it could end up resembling an internship, a research assistant position, a postdoc, or even a mentor/advisor to me. Below, I've listed out the parameters of the fellowship that I am using as a baseline of what it could be. All of these parameters are negotiable!

  • $25 per hour. This is not a lot for people who live in the SF Bay area, or who are used to industry salaries, but it looks to me like this is comparable to a typical grad student salary.
  • 20 hours per week. I'd like this fellowship to be one of your main projects, and I think it can take quite a lot of "deep work" focus before one can make progress on the research problems.[1]
  • 3 months, with a decent chance of extension. During my AI safety camp project, it took about 6 weeks to get people up to speed on all the parts of the agent structure problem. Ideally I could find someone for this role who is already closer to caught up (though I don't necessarily anticipate that). I'm thinking of this fellowship as something like an extended work-trial for potentially working together longer-term. That said, I think we should at least aim to get results by the end of it. Whether I'll decide to invite you to continue working with me afterwards depends on how our collaboration went (both technically and socially), how many other people I'm collaborating with at that time, and whether I think I have enough funds to support it.
  • Remote, but I'm happy to meet in person. Since I'm independent, I don't have anything like an office for you to make use of. But if you happen to be in the SF Bay area, I'd be more than happy to have our meetings in person. I wake up early, so US eastern and European time zones work well for me (and other time zones too).
  • Meeting 2-5 times per week. Especially in the beginning, I'd like to do a pretty large amount of syncing up. It can take a long time to convey all the aspects of the research problems. I also find that real-time meetings regularly generate new ideas. That said, some people find meetings worse for their productivity, and so I'll be responsive to your particular work style.
  • An end-of-term write-up. It seems to take longer than three months to get results on the types of questions I'm interested in, but I think it's good practice to commit to producing a write-up of how the fellowship goes. If it goes especially well, we could produce a paper.

What this role ends up looking like mostly depends on your experience level relative to mine. Though I now do research, I haven't gone through the typical academic path. I'm in my mid-thirties and have a proportional amount of life and career experience, but in terms of mathematics, I consider myself the equivalent of a second-year grad student. So I'm comfortable leading this project and am confident in my research taste, but you might know more math than me.

The research problems

Like all researchers in agent foundations, I find it quite difficult to concisely communicate what my research is about. Probably the best way to tell if you will be interested in my research problems is to read other things I've written, and then have a conversation with me about it. All my research is purely mathematical,[2] rather than experimental or empirical. None of it involves machine learning per se, but the theorems should ...

LW - o1-preview is pretty good at doing ML on an unknown dataset by Håvard Tveit Ihle

Sep 20th, 2024 5:31 PM

Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: o1-preview is pretty good at doing ML on an unknown dataset, published by Håvard Tveit Ihle on September 20, 2024 on LessWrong.

Previous post: How good are LLMs at doing ML on an unknown dataset?

A while back I ran some evaluation tests on GPT4o, Claude Sonnet 3.5 and Gemini Advanced to see how good they were at doing machine learning on a completely novel, and somewhat unusual, dataset. The data was basically 512 points in the 2D plane; some of the points make up a shape, and the goal is to classify the data according to what shape the points make up. None of the models did better than chance on the original (hard) dataset, while they did somewhat better on a much easier version I made afterwards.

With the release of o1-preview, I wanted to quickly run the same test on o1, just to see how well it did. In summary, it basically solved the hard version of my previous challenge, achieving 77% accuracy on the test set on its fourth submission (this increases to 91% if I run it for 250 instead of 50 epochs), which is really impressive to me. Here is the full conversation with ChatGPT o1-preview.

In general o1-preview seems like a big step change in its ability to reliably do hard tasks like this without any advanced scaffolding or prompting to make it work.

Detailed discussion of results

The architecture that o1 went for in the first round is essentially the same one that Sonnet 3.5 and Gemini went for: a PointNet-inspired model which extracts features from each point independently. While it managed to do slightly better than chance on the training set, it did not do well on the test set.

For round two, it went for the approach (which Sonnet 3.5 also came up with) of binning the points in 2D into an image, and then using a regular 2D convnet to classify the shapes. This worked somewhat on the first try. It completely overfit the training data, but got to an accuracy of 56% on the test data.

For round three, it understood that it needed to add data augmentations in order to generalize better, and it implemented scaling, translations and rotations of the data. It also switched to a slightly modified resnet18 architecture (a roughly 10x larger model). However, it made a bug when converting to a PIL image (and back to torch.tensor), which resulted in an error.

For round four, o1 fixed the error and had a basically working solution, achieving an accuracy of 77% (which increases to 91% if we increase the number of epochs from 50 to 250, all still well within the allotted hour of runtime). I consider the problem basically solved at this point; by playing around with smaller variations on this, you can probably get a few more percentage points without any more insights needed.

For the last round, it tried the standard approach of using the pretrained weights of resnet18 and freezing almost all the layers, which is an approach that works well on many problems, but did not work well in this case. The accuracy dropped to 41%. I guess these data are just too different from ImageNet (which resnet18 is trained on) for this approach to work well. I would not have expected this to work, but I don't hold it that much against o1, as it is a reasonable thing to try.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
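For readers who want to see what the binning-plus-convnet approach looks like in practice, here is a minimal sketch. This is not o1's actual code; the image size, number of shape classes, and augmentation ranges are assumptions made for illustration.

```python
# Sketch of the round-two/round-three approach described above: rasterize each
# set of 512 (x, y) points into an occupancy image, augment in point space, and
# classify with a lightly modified resnet18 (1-channel input, small class count).
import math
import random

import torch
import torch.nn as nn
from torchvision.models import resnet18


def points_to_image(points: torch.Tensor, size: int = 64) -> torch.Tensor:
    """Bin an (N, 2) tensor of points in roughly [-1, 1]^2 into a (1, size, size) image."""
    idx = ((points + 1.0) / 2.0 * (size - 1)).long().clamp(0, size - 1)
    img = torch.zeros(size, size)
    img[idx[:, 1], idx[:, 0]] = 1.0
    return img.unsqueeze(0)  # add channel dimension


def augment(points: torch.Tensor) -> torch.Tensor:
    """Random rotation, scaling, and translation applied before binning."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    rot = torch.tensor([[math.cos(theta), -math.sin(theta)],
                        [math.sin(theta),  math.cos(theta)]])
    scale = random.uniform(0.8, 1.2)
    shift = torch.empty(1, 2).uniform_(-0.1, 0.1)
    return points @ rot.T * scale + shift


num_classes = 5  # placeholder: however many shapes the dataset distinguishes
model = resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# One forward pass on a random batch, just to show the shapes line up
batch = torch.stack([points_to_image(augment(torch.rand(512, 2) * 2 - 1)) for _ in range(8)])
print(model(batch).shape)  # torch.Size([8, 5])
```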
