Dan Lyke 19:34:22+0000 (2025-03-07)— twitter (1/0) facebook (0/0) flutterby (1/1) — Lat,Lon: (38.248,-122.645)

Whoops, that discussion is at https://mastodon.de/@ErikUden/114122514403706066

Dan Lyke 19:34:01+0000 (2025-03-07)— twitter (1/0) facebook (0/0) flutterby (1/1) — Lat,Lon: (38.246,-122.645)

A morning discussion about a company that's creating LLM-generated newsletters, plus this comment about adding bots to a Skype channel, is making me think about bot-free spaces: how to create and enforce the social contract around them, and how to make sure people aren't violating it. (Also, people must read way way slower than me to make Gmail's "summarize this email" remotely interesting. I, once again, do not get it.)

Dan Lyke 18:25:34+0000 (2025-03-07)— twitter (1/0) facebook (0/0) flutterby (1/1) — Lat,Lon: (38.246,-122.645)

With the news that _Google is expanding the "AI Overviews" in search mode_ (https://blog.google/products/search/ai-mode-search/), via _Pivot To AI, which mentions that you can remove all facts from your search for an extra $20/month_ (https://pivot-to-ai.com/2025/03/07/oops-all-ai-google-replaces-search-with-the-ai-overview/), it's worth pointing out this example of Google uncritically presenting joke content as though it's real. _Wikipedia explains the joke_ (https://en.wikipedia.org/wiki/Wild_haggis).

Dan Lyke 16:41:05+0000 (2025-03-07)— twitter (1/0) facebook (0/0) flutterby (1/1) — Lat,Lon: (38.2249,-122.628)

This entire thread is worth reading from the top, but the observation that LLMs are most useful for applications where truth is irrelevant and lying is (more?) effective at achieving the goal is really landing this morning. https://thepit.social/@peter/114121763629051621