<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Machine Learning on Daniel Lyons</title><link>https://dandylyons.net/topics/machine-learning/</link><description>Recent content in Machine Learning on Daniel Lyons</description><generator>Hugo -- gohugo.io</generator><language>en-gb</language><lastBuildDate>Mon, 05 May 2025 09:28:59 -0600</lastBuildDate><atom:link href="https://dandylyons.net/topics/machine-learning/index.xml" rel="self" type="application/rss+xml"/><item><title>AI Is Not Logical, It's Probable</title><link>https://dandylyons.net/thoughts/ai-is-not-logical-its-probable/</link><pubDate>Mon, 05 May 2025 09:28:59 -0600</pubDate><guid>https://dandylyons.net/thoughts/ai-is-not-logical-its-probable/</guid><description>&lt;h2 id="sci-fi-vs-reality">Sci-Fi vs. Reality&lt;/h2>
&lt;p>For decades, science fiction painted a picture of AI as purely logical beings. Think of C-3PO, the protocol droid meticulously adhering to rules, or Data from Star Trek, striving to understand humanity through pure logic and data processing. We were led to believe AI would be predictable, rational, and perhaps a bit rigid in its adherence to algorithms.&lt;/p>
&lt;p>But the reality of modern AI, particularly the large language models (LLMs) powering many of today&amp;rsquo;s applications, is quite different. AI regularly surprises us with its creativity, humor, and even apparent emotional depth, yet it also behaves in ways that are irrational or just plain wrong. How can both be true?&lt;/p></description></item></channel></rss>