Need To Know News Roundup

Love is the Strongest Meme

A study showing that bonding voles have nearly identical patterns of brain activity, which means that love is the strongest meme.

Noninvasive BCI for VR is coming fast

RIP Vice Media. February 2024 breaks the record of heat records. Reddit made a deal with Google for AI training data worth 60 million bucks. Google's Gemini enters the Culture Wars. Jonathan Haidt talking about the fragmentation of everything. The Dark Forest Book. 3D-Printer for Chocolate. Lab-grown 'beef rice'. Apocalyptic Optimism. AI Girlfriend Data-Harvesting Horrorshows. Tiny Quadrotor Learns to Fly in 18 Seconds. Re-Insurance profits rise 580%. New studies on the impact of AI and tech on climate and environment. Four Fab Four Biopics from Sam Mendes. Synthballs for Musk. Mini-documentaries on street artist collective Rocco and his Brothers and grannies travelling beyond the limits of Red Dead Redemption's virtual world. AI-Emergence Voodoo. Neue Haas Grotesk in PETSCII. OpenAI disrupted state-sponsored AI misuse. Secret Mathematical Patterns in Bach's Music. Elle Cordova and inventions hanging out.

Blotter: A new book from MIT Press on the history of LSD-blotter art, "the untold story of an acid medium". Design Reviewed is a personal project dedicated to digitally preserving graphic design history and documenting the vast visual culture of the last century.

A paper last year stated that emergence in AI is a "mirage". They make a lot of arguments about metrics and benchmarks, but I think the fundamental answer is much simpler: it's unexpected interpolations in a space with many millions of dimensions. Train an AI on a cat and a dog, and you have a two-dimensional space with a cat-axis and a dog-axis, in which you can interpolate between the cat and the dog to various degrees. But you'll always get cats and dogs and cat-dog chimeras. Now add a bird: you get a three-dimensional space with a cat-axis, a dog-axis and a bird-axis, and you can interpolate between cats, dogs, and birds to get cat-dogs and cat-dog-birds and dog-birds and cat-birds. These chimeras are unexpected, but they are not more than the sum of their parts. Every time you add a thing to your dataset, you ramp up the dimensions of latent space and the number of interpolative states between data points, and that number grows exponentially. If you then use metrics and benchmarks like standard tests for standard tasks, of course you'll find "surprising" abilities that can solve this or that task that is not present in the dataset. So-called emergent behaviors are just interpolated things generated by connecting many, many data points in a very large dataset whose content has been tokenized/atomized into chunks of letters. This is also why we see "sudden" jumps in solvable tasks and "emergences" that "suddenly" appear with scaling: it's just interpolated outputs whose number grows exponentially with the number of parameters, and this makes many people believe in voodoo. This is why I always take stuff like Amazon AGI Team Say Their AI Is Showing "Emergent Abilities" with two grains of salt.
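To put a number on that combinatorial blowup, here's a minimal toy sketch in Python — my own illustration, not code from the "mirage" paper: treat each concept as its own axis in latent space, interpolate between them, and count how many chimera combinations n concepts allow.

```python
# Toy sketch of the interpolation argument above -- an illustration,
# not code from the "mirage" paper. Concepts are one-hot axes in a
# latent space; a "chimera" is a mix of two or more of them.
from itertools import combinations

concepts = ["cat", "dog", "bird"]

# One-hot "embeddings": every new concept adds a dimension.
embeddings = {c: [1.0 if i == j else 0.0 for j in range(len(concepts))]
              for i, c in enumerate(concepts)}

def interpolate(a: str, b: str, t: float) -> list[float]:
    """Linearly interpolate between two concept vectors (0 <= t <= 1)."""
    return [(1 - t) * x + t * y for x, y in zip(embeddings[a], embeddings[b])]

print(interpolate("cat", "dog", 0.5))  # [0.5, 0.5, 0.0] -- a cat-dog chimera

# All chimera *types* for three concepts: cat-dog, cat-bird, dog-bird,
# cat-dog-bird -- exactly the four combinations named in the text.
for size in range(2, len(concepts) + 1):
    for combo in combinations(concepts, size):
        print("-".join(combo))

# The count of such combinations is 2^n - n - 1: exponential in n.
for n in (3, 10, 20, 50):
    print(f"{n:>2} concepts -> {2**n - n - 1:,} interpolative combinations")
```

Three concepts give exactly the four chimeras named above; fifty already give more than a quadrillion, which is roughly the point where "surprising" benchmark results stop being surprising.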
OpenAI "disrupted" malicious uses of AI by five state-affiliated threat actors, including "two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard". Here's more at Microsoft's security blog. In their post, they say that OpenAI's "findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks". Research says otherwise.

The published activity of those state actors doesn't read as very sophisticated: "research various companies and cybersecurity tools", "translate technical papers", "retrieve publicly available information on multiple intelligence agencies and regional threat actors", "assist with coding", "open-source research into satellite communication protocols and radar imaging technology". The use cases listed here are pretty common: research stuff, translation, code. In contrast, research into AI hacks has found plenty of vulnerabilities to jailbreaking, ranging from sophisticated persuasion techniques to automatic jailbreaking, with one of the latest papers showing that LLM Agents can Autonomously Hack Websites and "that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild". These listed activities don't sound like they used any of those methods, and I'd honestly be surprised if OpenAI's models truly offer only "limited capabilities for malicious cybersecurity tasks".

Hell yeah, culture war content: Google Pauses Gemini's Image Generator After It Was Accused of Being Racist Against White People.
